Quantitative finance
E.I. Valuation
[ E.0a ]
Valuation foundations
[ E.0a.1 ]
Value from jointly lognormal SDF and payoff
[ E.0a.2 ]
Non-linear valuation from jointly lognormal SDF and payoff
[ E.0a.3 ]
Cash-flow record date
[ E.0a.4 ]
Jump rule. Theory
[ E.0a.5 ]
Cash-flow and P&L additivity
[ E.0a.6 ]
Additivity of the reinvested cumulative cash-flow
[ E.0a.7 ]
Generalized forward cash-flow-adjusted value
[ E.0a.8 ]
Payoff and forward cash-flow-adjusted value
[ E.0a.9 ]
Forward and backward cash-flow-adjusted values
[ E.0a.10 ]
Conversion of the P&L from local to base currency
[ E.0a.11 ]
Fair value in base currency
[ E.0a.12 ]
Conversion rule for linear returns from local currency to base currency
[ E.0a.13 ]
Conversion rule for linear returns as a special case of the total simple portfolio P&L
[ E.0a.14 ]
The actual exchange rate
[ E.0a.15 ]
Coupon bond dirty price
[ E.0a.16 ]
P&L of a futures contract
[ E.0b ]
Linear pricing theory: core
[ E.0b.1 ]
Linear pricing operator
[ E.0b.2 ]
Equivalent statements of absence of arbitrage
[ E.0b.3 ]
Law of one price, linearity and arbitrage
[ E.0b.4 ]
Stochastic discount factor and absence of arbitrage
[ E.0b.5 ]
Kernel stochastic discount factor
[ E.0b.6 ]
Minimum relative entropy numeraire measure and stochastic discount factor
[ E.0b.7 ]
Risk neutral probability measure
[ E.0b.8 ]
One period risk-neutral probability measure
[ E.0b.9 ]
Continuous time risk-neutral probability measure
[ E.0b.10 ]
Continuous rebalancing limit: risk-neutral density
[ E.0b.11 ]
Risk-neutral density in the Black-Scholes-Merton model
[ E.0b.12 ]
Derivation of the fundamental theorem of asset pricing
[ E.0b.13 ]
Maximum Sharpe Ratio portfolio
[ E.0b.14 ]
Maximum Sharpe ratio portfolio weights
[ E.0b.15 ]
Equivalent formulation of security market line in terms of Sharpe ratio
[ E.0b.16 ]
Regression model for the stochastic discount factor
[ E.0b.17 ]
About the “insurance” term appearing in the covariance principle
[ E.0b.18 ]
From Buhlmann transform to the covariance principle
[ E.0c ]
Linear pricing theory: further assumptions
[ E.0c.1 ]
Inverse of the payoff matrix of the European call option basis
[ E.0c.2 ]
Payoff-replicating portfolio with respect to the European call option basis
[ E.0c.3 ]
European payoffs as combinations of call option payoffs
[ E.0c.4 ]
Common payoffs as combinations of call payoffs: butterfly option
[ E.0c.5 ]
Common payoffs as combinations of call payoffs: strangle option
[ E.0c.6 ]
Common payoffs as combinations of call payoffs: straddle option
[ E.0c.7 ]
Common payoffs as combinations of call payoffs: condor option
[ E.0c.8 ]
Parabola payoff as a combination of call payoffs
[ E.0c.9 ]
Current value of an arbitrary payoff generated by call options
[ E.0c.10 ]
Each instrument in a complete market is a combination of Arrow-Debreu securities
[ E.0c.11 ]
Arrow-Debreu securities as butterflies
[ E.0c.12 ]
Butterfly in terms of central difference second order derivative operator
[ E.0c.13 ]
Stochastic discount factor identified by the European call option basis
[ E.0c.14 ]
Arrow-Debreu securities under continuum
[ E.0c.15 ]
Conditional stochastic discount factor
[ E.0c.16 ]
Second derivative of the call option value with respect to the strike
[ E.0c.17 ]
Stochastic discount factor in the Black-Scholes-Merton framework
[ E.0c.18 ]
Distribution of the stochastic discount factor in the Black-Scholes-Merton framework
[ E.0c.19 ]
From the security market line to the standard CAPM
[ E.0c.20 ]
Buhlmann pricing equation in terms of returns
[ E.0c.21 ]
Systematic-idiosyncratic linear factor model in a complete market
[ E.0c.22 ]
Systematic-idiosyncratic linear factor model with zero residuals
[ E.0c.23 ]
Derivation of the APT model from the stochastic discount factor model
[ E.0c.24 ]
From APT to the security market line
[ E.0c.25 ]
Pricing equation as conditional expectation: proof
[ E.0c.26 ]
Alternative formulation of the intertemporal pricing equation
[ E.0c.27 ]
The martingale pricing formula
[ E.0d ]
Non-linear pricing theory
[ E.0d.1 ]
Shift principles: calibration
[ E.0d.2 ]
Exponential principle: calibration
[ E.0d.3 ]
Wang distortion principle: calibration
[ E.0d.4 ]
Esscher principle: calibration
[ E.0e ]
Valuation implementation
[ E.0e.1 ]
The Gordon growth model
[ E.0e.2 ]
The value of an oil field production
[ E.0e.3 ]
Enterprise value using comparables
[ E.0e.4 ]
Value of a pure endowment contract. Theory
[ E.0e.5 ]
Value of a pure endowment contract. Application
[ E.0e.6 ]
Non-life insurance valuation in the regulatory risk framework
[ E.0e.7 ]
Non-life insurance valuation with homogeneous independent claims
[ E.0e.8 ]
Non-life insurance valuation. Application
E.II. The “Checklist”
[ E.1 ]
Risk drivers identification
[ E.1.1 ]
ODE for a perpetual American call option
[ E.1.2 ]
Solution to the ODE for a perpetual American call option
[ E.1.3 ]
Perpetual American option with arithmetic Brownian motion underlying
[ E.1.4 ]
Inverse-call transformation
[ E.1.5 ]
The forward rate
[ E.1.6 ]
Continuously compounded forward rate
[ E.1.7 ]
Instantaneous forward rate
[ E.1.8 ]
Spot rate as average of forward rates
[ E.1.9 ]
Duration-times-spread
[ E.1.10 ]
First derivative of the maximum function
[ E.1.11 ]
The canonical basis property of the Dirac delta function applied to the maximum function
[ E.1.12 ]
Common payoffs as combinations of call payoffs: put option
[ E.1.13 ]
Forward start variance swap payoff as calendar-weighted average of spot variance payoffs
[ E.1.14 ]
Value of forward variance swap
[ E.1.15 ]
Alternative formulation of the fair value of the realized variance
[ E.1.16 ]
Variance swap fair value as combination of options
[ E.1.17 ]
Rolling value versus implied volatility
[ E.1.18 ]
Parsimonious SVI parametrization (I)
[ E.1.19 ]
Parsimonious SVI parametrization (II)
[ E.1.20 ]
Parsimonious SVI parametrization (III)
[ E.1.21 ]
Parsimonious SVI parametrization (IV)
[ E.1.22 ]
SVI parameters behaving as random walks
[ E.1.23 ]
Vasicek parametrization of the yield curve
[ E.2 ]
Quest for invariance
[ E.2.1 ]
Market efficiency and random walk
[ E.2.2 ]
The Poisson process and the distribution of the waiting times in high frequency trading
[ E.2.3 ]
Moment generating function and independence
[ E.2.4 ]
Stochastic volatility model with Student t distribution
[ E.2.5 ]
Equivalence between stochastic mean/volatility and mixture of distributions
[ E.2.6 ]
Equivalence between stochastic mean/volatility and mixture of distributions (I)
[ E.2.7 ]
Equivalence between stochastic mean/volatility and mixture of distributions (II)
[ E.2.8 ]
Equivalence between stochastic mean/volatility and mixture of distributions (III)
[ E.2.9 ]
Compound probability versus uncountable mixture
[ E.2.10 ]
AR(1) as a Markov process
[ E.2.11 ]
Time homogeneity for AR(1) process
[ E.2.12 ]
Discrete mean reverting, but not stationary, risk drivers
[ E.2.13 ]
Autocorrelation of a time-homogeneous Markov chain
[ E.2.14 ]
Fast decay of time-homogeneous Markov chain autocorrelations [work in progress]
[ E.2.15 ]
Invariant of a Markov chain
[ E.2.16 ]
Calibration of a structural model
[ E.2.17 ]
Structural credit models in terms of return on equity
[ E.2.18 ]
Log-leverage, linear return, default probability under Merton’s assumptions
[ E.2.19 ]
Markov chain-structural models identification
[ E.2.20 ]
Squared volatility as a function of past squared increments in GARCH(1,1) model
[ E.2.21 ]
Stationarity of GARCH
[ E.2.22 ]
VAR(1) as special case of Kalman filter
[ E.2.23 ]
Replicating a cointegrated combination of yields with a portfolio of bonds
[ E.2.24 ]
Mixture of invariants model as hidden Markov model
[ E.3 ]
Estimation
[ E.3.1 ]
Glivenko-Cantelli theorem: theory
[ E.3.2 ]
Minimum relative entropy as best between Gaussian versus exponential kernel
[ E.3.3 ]
Generalized Glivenko-Cantelli theorem
[ E.3.4 ]
Kernel with flexible probabilities mean estimation
[ E.3.5 ]
Regularized pdf
[ E.3.6 ]
Comparison of kernel with flexible probabilities and historical with flexible probabilities estimates
[ E.3.7 ]
Relative entropy vs. maximum likelihood
[ E.3.8 ]
MLFP estimators for elliptical variables
[ E.3.9 ]
MLFP estimators for the Student t distribution. Theory
[ E.3.10 ]
Conditional excess distribution
[ E.3.11 ]
Generalized Pareto distribution I
[ E.3.12 ]
Influence function as the limit of sensitivity curve
[ E.3.13 ]
Influence function of the sample covariance: preliminary computation
[ E.3.14 ]
Influence function of the sample covariance
[ E.3.15 ]
Influence function of maximum likelihood estimators
[ E.3.16 ]
Influence function of location and dispersion MLFP estimators for elliptical distributions
[ E.3.17 ]
Influence function of ML estimators of location-dispersion under t
[ E.3.18 ]
Influence function of M-estimators
[ E.3.19 ]
M-estimators: location and dispersion
[ E.3.20 ]
Minimum volume ellipsoid
[ E.3.21 ]
Minimum covariance determinant
[ E.3.22 ]
Minimum volume ellipsoid and minimum covariance determinant algorithm
[ E.3.23 ]
Method of moments with flexible probabilities: reflected shifted lognormal. Theory
[ E.3.24 ]
Equivalent formulation of conditional excess distribution
[ E.3.25 ]
Maximum likelihood estimation from generalized method of moments
[ E.3.26 ]
Over-specified formulation of the generalized method of moments
[ E.3.27 ]
Robust estimation of the covariance matrix: rescaled HBFP ellipsoid
[ E.3.28 ]
ML-estimation of transition probability matrix for credit migration modeling
[ E.3.29 ]
MLFP estimation of transition probability matrix for credit migration modeling
[ E.3.30 ]
Expectation-maximization with flexible probabilities for missing values. Theory
[ E.3.31 ]
Maximum likelihood with flexible probabilities for different-length series. Theory
[ E.3.32 ]
Realized and empirical variance
[ E.3.33 ]
MLFP estimation of GARCH(1,1) with normal innovations
[ E.3.34 ]
Exponentially weighted moving average updating
[ E.3.35 ]
Dirichlet distribution
[ E.3.36 ]
Invariants distribution from the estimated standardized invariants distribution
[ E.3.37 ]
Conditional distribution of two univariate normal invariants
[ E.3.38 ]
Exponential family invariants: conjugate distribution
[ E.3.39 ]
Exponential family invariants: posterior distribution
[ E.3.40 ]
Exponential family invariants: predictive distribution
[ E.3.41 ]
Pdf of the information set
[ E.3.42 ]
Normal-inverse-Wishart location-dispersion: posterior distribution
[ E.3.43 ]
Normal-inverse-Wishart location-dispersion: predictive distribution
[ E.3.44 ]
Normal-inverse-Wishart location-dispersion: mode
[ E.3.45 ]
Normal-inverse-Wishart location-dispersion: modal dispersion
[ E.3.46 ]
Inverse-Wishart dispersion: mode
[ E.3.47 ]
Inverse-Wishart dispersion: modal dispersion
[ E.3.48 ]
Normal-inverse-Wishart location-dispersion: marginal distribution of location
[ E.3.49 ]
Independence of sample mean and covariance
[ E.3.50 ]
Distribution of the sample mean
[ E.3.51 ]
Distribution of the sample covariance
[ E.3.52 ]
The Marchenko-Pastur approximation: the general case
[ E.3.53 ]
Expectation shrinkage
[ E.3.54 ]
Singular covariance matrix
[ E.3.55 ]
Spectrum analysis
[ E.3.56 ]
Distance for sparse matrix shrinkage of correlation
[ E.3.57 ]
Distance for sparse matrix shrinkage of correlation: computations
[ E.3.58 ]
Stein’s lemma
[ E.3.59 ]
Shrinkage estimator of location
[ E.3.60 ]
Shrinkage estimator of dispersion
[ E.3.61 ]
Shrinkage estimator of dispersion: spectrum
[ E.3.62 ]
Distance matrix for correlation clustering
[ E.3.63 ]
Conditional covariance of normal variables
[ E.3.64 ]
Covariance and correlation of rescaled/normalized random variables
[ E.3.65 ]
Copula of Markov chain’s invariants
[ E.3.66 ]
Correlation of returns via GARCH residuals
[ E.4 ]
Projection
[ E.4.1 ]
Distribution of the sum of independent variables
[ E.4.2 ]
Square-root rule for a generic stochastic process
[ E.4.3 ]
Non-central moments to central moments
[ E.4.4 ]
Cumulant projection
[ E.4.5 ]
Central moments of a normal random variable
[ E.4.6 ]
Projection by averaging the historical non-overlapping distribution
[ E.4.7 ]
Hybrid Monte Carlo-historical projection: implementation
[ E.5 ]
Pricing at the horizon
[ E.5.1 ]
Dynamics and distribution of the stock value under the geometric Brownian motion assumption
[ E.5.2 ]
P&L of a forward contract
[ E.5.3 ]
Currency carry
[ E.5.4 ]
Foreign exchange carry trade
[ E.5.5 ]
Annualized carry return of a zero-coupon bond (theory)
[ E.5.6 ]
Annualized carry return of a bond (theory)
[ E.5.7 ]
Carry of a variance swap
[ E.5.8 ]
Greeks of equity P&L with stock value as risk driver
[ E.5.9 ]
Greeks of equity P&L with stock log-value as risk driver
[ E.5.10 ]
Bond Greeks
[ E.5.11 ]
Bond yield
[ E.5.12 ]
Bond convexity
[ E.5.13 ]
M-square
[ E.5.14 ]
Parallel shift of the yield curve for the Taylor approximation of the P&L of a coupon bond
[ E.5.15 ]
Equivalent definitions of effective duration and convexity
[ E.5.16 ]
Taylor approximation of the P&L of a coupon bond under parallel shifts
[ E.5.17 ]
Taylor approximation of variance swap P&L
[ E.5.18 ]
Global quadratic approximation for P&L
[ E.5.19 ]
Analytical distribution of the P&L at the horizon: MVOU drivers and Taylor approximation
[ E.5.20 ]
Elliptical risk drivers: location and dispersion of the P&L approximated at first order
[ E.5.21 ]
Analytical distribution of the joint P&L for stocks with normal compounded returns
[ E.6 ]
Aggregation
[ E.6.1 ]
Normal P&Ls imply normal returns
[ E.6.2 ]
Equally weighted portfolio
[ E.6.3 ]
Generator of ex-ante performance of elliptical risk drivers
[ E.6.4 ]
Regulatory credit framework: one-factor model
[ E.6.5 ]
CVA computation under simplifying assumptions
[ E.6.6 ]
Regulatory credit framework: conditional expectation of the portfolio P&L (II)
[ E.6.7 ]
Conditional log-characteristic function of single counterparty P&L in CreditRisk+
[ E.6.8 ]
Unconditional log-characteristic function of the portfolio P&L in CreditRisk+
[ E.6.9 ]
Minimum collateral
[ E.6.10 ]
Gross and net exposure
[ E.7 ]
Ex-ante evaluation
[ E.7.1 ]
Properties of the ex-ante performance
[ E.7.2 ]
Strong dominance implies arbitrage
[ E.7.3 ]
Estimability and monotonicity imply weak dominance consistency
[ E.7.4 ]
Translation invariance implies constancy
[ E.7.5 ]
Expected value: consistency with second order dominance
[ E.7.6 ]
Expected value: consistency with order q dominance
[ E.7.7 ]
The negative variance is concave
[ E.7.8 ]
The negative variance and the mean-variance trade-off are not consistent with order q dominance
[ E.7.9 ]
The negative variance and the mean-variance trade-off are not comonotonic additive
[ E.7.10 ]
The negative variance is not risk averse
[ E.7.11 ]
The negative variance and the mean-variance trade-off are not super-additive
[ E.7.12 ]
Expectation, variance jointly elicitable
[ E.7.13 ]
The mean-variance trade-off is translation invariant
[ E.7.14 ]
The mean-variance trade-off is jointly elicitable with the variance
[ E.7.15 ]
The fundamental risk quadrangle: sub-quantile
[ E.7.16 ]
Certainty-equivalent: uniqueness
[ E.7.17 ]
Certainty-equivalent: estimability
[ E.7.18 ]
Certainty-equivalent: monotonicity (increasing utility)
[ E.7.19 ]
Certainty-equivalent: consistency with weak dominance (increasing utility)
[ E.7.20 ]
Certainty-equivalent: consistency with order q dominance
[ E.7.21 ]
Certainty-equivalent: constancy
[ E.7.22 ]
Certainty-equivalent: money-equivalence
[ E.7.23 ]
Certainty-equivalent: positive homogeneity of degree 1 (power utility)
[ E.7.24 ]
Certainty-equivalent: translation invariance (exponential utility)
[ E.7.25 ]
Certainty-equivalent: additivity (linear utility)
[ E.7.26 ]
Certainty-equivalent: comonotonic additivity (linear utility)
[ E.7.27 ]
Certainty-equivalent: risk aversion, risk propensity and risk neutrality
[ E.7.28 ]
Relation between Arrow-Pratt risk aversion function and utility function
[ E.7.29 ]
Certainty-equivalent and positive affine transformations of the utility function
[ E.7.30 ]
Certainty-equivalent (quadratic normal distribution)
[ E.7.31 ]
Certainty-equivalent (elliptical distribution)
[ E.7.32 ]
The value at risk
[ E.7.33 ]
Quantile (VaR) satisfaction measure: estimability
[ E.7.34 ]
Quantile (VaR) satisfaction measure: monotonicity
[ E.7.35 ]
Quantile (VaR) satisfaction measure: consistency with weak dominance
[ E.7.36 ]
Quantile (VaR) satisfaction measure: violation of consistency with order q dominance
[ E.7.37 ]
Quantile (VaR) satisfaction measure: constancy
[ E.7.38 ]
Quantile (VaR) satisfaction measure: money-equivalence
[ E.7.39 ]
Quantile (VaR) satisfaction measure: positive homogeneity of degree 1
[ E.7.40 ]
Quantile (VaR) satisfaction measure: translation invariance
[ E.7.41 ]
Quantile (VaR) satisfaction measure: violation of super-additivity
[ E.7.42 ]
Quantile (VaR) satisfaction measure: comonotonic additivity
[ E.7.43 ]
Quantile (VaR) satisfaction measure: violation of concavity and convexity
[ E.7.44 ]
Quantile (VaR) satisfaction measure: violation of risk-aversion, risk-seeking and risk-neutrality
[ E.7.45 ]
Quantile (VaR) satisfaction measure of normally distributed ex-ante performances satisfies super-additivity
[ E.7.46 ]
Central moments of an affine transformation of a multivariate random variable
[ E.7.47 ]
Variance of an affine transformation of a multivariate random variable
[ E.7.48 ]
Expectation, standard deviation and skewness of a portfolio P&L under lognormality
[ E.7.49 ]
The expected shortfall as sub-quantile
[ E.7.50 ]
Sub-quantile satisfaction measures are monotone
[ E.7.51 ]
Sub-quantile satisfaction measures: consistency with weak dominance
[ E.7.52 ]
Sub-quantile satisfaction measures: constancy
[ E.7.53 ]
Sub-quantile satisfaction measures: money-equivalence
[ E.7.54 ]
Sub-quantile satisfaction measures: positive homogeneity of degree 1
[ E.7.55 ]
Sub-quantile satisfaction measures: translation invariance
[ E.7.56 ]
Sub-quantile satisfaction measure: super-additivity
[ E.7.57 ]
Sub-quantile satisfaction measures: comonotonic additivity
[ E.7.58 ]
Sub-quantile satisfaction measures: risk-aversion
[ E.7.59 ]
Sub-quantile satisfaction measures: violation of consistency with order q dominance
[ E.7.60 ]
Spectral satisfaction measures: estimability
[ E.7.61 ]
Spectral satisfaction measures are monotone
[ E.7.62 ]
Spectral satisfaction measures: consistency with weak dominance
[ E.7.63 ]
Spectral satisfaction measures: constancy
[ E.7.64 ]
Spectral satisfaction measures: money-equivalence
[ E.7.65 ]
Spectral satisfaction measures: positive homogeneity of degree 1
[ E.7.66 ]
Spectral satisfaction measures: translation invariance
[ E.7.67 ]
Spectral satisfaction measures: comonotonic additivity
[ E.7.68 ]
Spectral satisfaction measures: violation of consistency with order q dominance
[ E.7.69 ]
Spectral satisfaction measures: violation of super-additivity
[ E.7.70 ]
Spectral satisfaction measures: violation of concavity and convexity
[ E.7.71 ]
Spectral/distortion satisfaction measure weights (scenario-probability distribution)
[ E.7.72 ]
Alternative representation of spectral satisfaction measure
[ E.7.73 ]
Quantile (VaR) satisfaction measure: distortion function
[ E.7.74 ]
Sub-quantile satisfaction measure: distortion function
[ E.7.75 ]
The Wang expectation: distortion function
[ E.7.76 ]
The Buhlmann expectation is not a distortion expectation
[ E.7.77 ]
Equivalence between spectral and distortion satisfaction measures
[ E.7.78 ]
Spectral measures as weighted averages of expected shortfalls
[ E.7.79 ]
Equivalent definitions of monotonicity
[ E.7.80 ]
The mean-lower partial moment root is coherent
[ E.7.81 ]
Dual representation of the sub-quantile satisfaction measure in the scenario-probability framework
[ E.7.82 ]
Worst possible measure for the expected shortfall in the scenario-probability framework
[ E.7.83 ]
Coherent spectral satisfaction measures as distortions
[ E.7.84 ]
Coherent spectral satisfaction measures: super-additivity
[ E.7.85 ]
Mean-lower partial moment trade-off: violation of comonotonic additivity
[ E.7.86 ]
Coherent satisfaction measures: consistency with weak dominance
[ E.7.87 ]
Coherent satisfaction measures: constancy
[ E.7.88 ]
Coherent satisfaction measures: money-equivalence
[ E.7.89 ]
Coherent representation of coherent spectral measures
[ E.7.90 ]
Coherent spectral measures: characterization
[ E.7.91 ]
Worst case representation of expectiles
[ E.7.92 ]
Convex combinations of coherent satisfaction measures
[ E.7.93 ]
The Wang distortion expectation
[ E.7.94 ]
The proportional hazards distortion expectation
[ E.7.95 ]
Cornish-Fisher approximation for spectral satisfaction measures
[ E.7.96 ]
Extreme value theory: approximation of the sub-quantile satisfaction measure
[ E.7.97 ]
Sub-quantile satisfaction measure weights (scenario-probability distribution)
[ E.7.98 ]
Derivative of an indefinite integral
[ E.7.99 ]
Relation between the omega and the kappa ratio
[ E.7.100 ]
Economic capital
[ E.7.101 ]
The Buhlmann expectation is a distortion expectation
[ E.7.102 ]
First order approximation of the Buhlmann expectation
[ E.7.103 ]
Esscher transform as minimum entropy distribution
[ E.7.104 ]
First order approximation of the Esscher expectation
[ E.7.105 ]
The Esscher expectation is neither positive homogeneous nor linear
[ E.7.106 ]
Buhlmann expectation: linearity
[ E.7.107 ]
The utility function as the cdf of a subjective distribution
[ E.7.108 ]
The certainty-equivalent as the quantile (VaR)
[ E.7.109 ]
The Arrow-Pratt approximation
[ E.7.110 ]
Esscher expectation under normality assumption
[ E.7.111 ]
Buhlmann expectation under normality assumption
[ E.8a ]
Ex-ante attribution: performance
[ E.8a.1 ]
Joint distribution factor and residual: elliptical case
[ E.8a.2 ]
Relationship between bottom-up and top-down exposures: cross-sectional instruments-level attribution
[ E.8a.3 ]
Black-Scholes-Merton delta hedging
[ E.8b ]
Ex-ante attribution: risk
[ E.8b.1 ]
Standard deviation: gradient and Euler marginal contributions
[ E.8b.2 ]
Variance: gradient and Euler marginal contributions
[ E.8b.3 ]
Certainty-equivalent: gradient and Euler marginal contributions (power utility)
[ E.8b.4 ]
Quantile (VaR): gradient and Euler marginal contributions
[ E.8b.5 ]
The spectral satisfaction measure is not differentiable in the scenario-probability framework
[ E.8b.6 ]
Quantile (VaR): gradient and Euler marginal contributions (scenario-probability)
[ E.8b.7 ]
Quantile (VaR): gradient and Euler marginal contributions (elliptical distribution)
[ E.8b.8 ]
Sub-quantile: gradient and Euler marginal contributions
[ E.8b.9 ]
Sub-quantile: gradient and Euler marginal contributions (scenario-probability)
[ E.8b.10 ]
Sub-quantile: gradient and Euler marginal contributions (elliptical distribution)
[ E.8b.11 ]
Spectral measures: gradient and Euler marginal contributions
[ E.8b.12 ]
Spectral measures: gradient and Euler marginal contributions (scenario probability)
[ E.8b.13 ]
Spectral measures: gradient and Euler marginal contributions (elliptical distribution)
[ E.8b.14 ]
Coherent measures: gradient and Euler marginal contributions
[ E.8b.15 ]
Twisted expectations and spectral measures
[ E.8b.16 ]
Computation of the marginal contributions for the Esscher expectation
[ E.8b.17 ]
Marginal risk contributions for the variance risk measure
[ E.8b.18 ]
Esscher risk contributions
[ E.8b.19 ]
The economic capital is positive homogeneous of first degree
[ E.8b.20 ]
The minimum-torsion diversification distribution
[ E.8b.21 ]
Effective number of bets
[ E.8b.22 ]
Risk attribution: principal components
[ E.8b.23 ]
The principal components diversification distribution
[ E.8b.24 ]
General solution of the minimum-torsion optimization problem
[ E.8b.25 ]
Constrained analytical solution of the minimum-torsion optimization problem
[ E.8b.26 ]
Unconstrained numerical solution of the minimum-torsion optimization problem
[ E.9a ]
Construction: portfolio optimization
[ E.9a.1 ]
Portfolio optimization problem
[ E.9b ]
Construction: estimation and model risk
[ E.9c ]
Construction: cross-sectional strategies
[ E.9c.1 ]
Market-capitalization allocation
[ E.9c.2 ]
Maximal constrained signal-to-noise ratio
[ E.9c.3 ]
Maximal conditional signal-to-noise ratio
[ E.9c.4 ]
Maximal conditional signal-to-noise ratio (normal case)
[ E.9c.5 ]
Fundamental law of active management (under normal assumption)
[ E.9c.6 ]
Smart beta: factor premium
[ E.9c.7 ]
Flexible and standard characteristic portfolio
[ E.9c.8 ]
Linkage matrix and signal weakness
[ E.9c.9 ]
Characteristic portfolio variance
[ E.9d ]
Construction: time series strategies
[ E.9d.1 ]
Self-financing constraint for portfolio holdings
[ E.9d.2 ]
Strategy dynamics (the general case)
[ E.9d.3 ]
Strategy dynamics (arithmetic Brownian motion)
[ E.9d.4 ]
Strategy dynamics (geometric Brownian motion)
[ E.9d.5 ]
Strategy distributions (arithmetic Brownian motion)
[ E.9d.6 ]
Strategy distributions (geometric Brownian motion)
[ E.9d.7 ]
Partial differential equation for Bachelier’s formula
[ E.9d.8 ]
Dynamic payoff replication strategy
[ E.9d.9 ]
Maximum utility for arithmetic Brownian motion
[ E.9d.10 ]
Maximum utility for geometric Brownian motion
[ E.9d.11 ]
Utility maximization versus payoff replication
[ E.9d.12 ]
Payoff function of utility maximization (exponential utility)
[ E.9d.13 ]
Payoff function of utility maximization (power utility)
[ E.9d.14 ]
PDE of power utility maximization
[ E.9d.15 ]
Solution of the power utility maximization
[ E.9d.16 ]
Cushion of the CPPI strategy
[ E.9d.17 ]
Linear time-invariant filter in continuous time
[ E.9d.18 ]
The dynamics of a linear time-invariant signal
[ E.9d.19 ]
Exponentially weighted moving average in continuous time
[ E.9d.20 ]
Dynamics of the exponentially weighted moving average
[ E.9d.21 ]
P&L of signal induced strategies
[ E.9d.22 ]
A simple signal induced strategy
[ E.10 ]
Execution
[ E.10.1 ]
VWAP trading strategy
[ E.10.2 ]
Meaning of one unit of volume time
[ E.10.3 ]
Interpretation of the trading speed ḣ_q and the daily parameter η in the Almgren-Chriss model
[ E.10.4 ]
Market impact P&L
[ E.10.5 ]
Expectation and variance of the trading P&L
[ E.10.6 ]
Normalized market impact model
[ E.10.7 ]
Expectation and variance of the market impact P&L in the Almgren-Chriss model
[ E.10.8 ]
Mean-variance optimization problem in the Almgren-Chriss model
[ E.10.9 ]
P&L optimization: Almgren-Chriss model
[ E.10.10 ]
The VWAP trading strategy in the Almgren-Chriss model
[ E.10.11 ]
Optimization problem in the multidimensional Almgren-Chriss model
[ E.10.12 ]
Solution of the multidimensional Almgren-Chriss model
[ E.10.13 ]
Market impact P&L under the Almgren-Chriss model
[ E.10.14 ]
Expectation and variance of the market impact P&L under a power execution strategy
[ E.10.15 ]
Transient impact: the optimization problem
[ E.10.16 ]
Transient impact: the Obizhaeva-Wang model
[ E.10.17 ]
Transient impact: the Dang model
[ E.10.18 ]
Transient impact: power law decay kernel
[ E.10.19 ]
Transient impact: logarithmic decay kernel
[ E.10.20 ]
Price manipulation
[ E.10.21 ]
Zero-intelligence model: statistical properties of the limit order book [work in progress]
E.III. Performance analysis
[ E.11 ]
Performance attribution
E.IV. Financial toolbox
[ E.12 ]
Performance definitions
[ E.12.1 ]
Computation of the internal rate of return
[ E.12.2 ]
Trading P&L (single trade)
[ E.12.3 ]
Decomposition of the total trading P&L
[ E.12.4 ]
The implementation shortfall in the total trading P&L
[ E.12.5 ]
Trading P&L (multiple trading dates)
[ E.12.6 ]
Trading P&L: opening and liquidating positions
[ E.12.7 ]
Aggregation property of linear returns (across instruments)
[ E.12.8 ]
Aggregation property of compounded returns (across time)
[ E.12.9 ]
Alternative formulation for the generalized excess return
[ E.12.10 ]
Compounded rate of return in terms of adjusted values
[ E.12.11 ]
Par swap rate as IRR of a coupon bond
[ E.12.12 ]
Generalized portfolio weights
[ E.12.13 ]
Offset cash
[ E.13 ]
Signals
[ E.13.1 ]
Equivalence of the order imbalance signal definition
[ E.14 ]
Black-Litterman
[ E.14.1 ]
Black-Litterman prior distribution
[ E.14.2 ]
Black-Litterman posterior distribution
[ E.14.3 ]
Black-Litterman: confidence level in views
Data science
E.V. Mathematics
[ E.15 ]
Linear algebra primer
[ E.15.1 ]
Linear independence
[ E.15.2 ]
Vector operations on coordinates
[ E.15.3 ]
Direct sum of vector subspaces
[ E.15.4 ]
Matrix operations
[ E.15.5 ]
Matrix basic properties
[ E.15.6 ]
Dimension of general linear group
[ E.15.7 ]
Positive semidefinite matrix
[ E.15.8 ]
Positive definiteness of block-diagonal
[ E.15.9 ]
Positive definiteness of inverse
[ E.15.10 ]
Positive definiteness of Kronecker product
[ E.15.11 ]
Linear operator as inner product
[ E.15.12 ]
Useful identities for inner product spaces
[ E.15.13 ]
Cauchy-Schwarz inequality
[ E.15.14 ]
Orthonormal sets are linearly independent
[ E.15.15 ]
Orthogonal projection over a span
[ E.15.16 ]
Orthogonal projection over direct sums
[ E.15.17 ]
p-norm is a norm
[ E.15.18 ]
Distance induced by norm
[ E.15.19 ]
Eigenvalues of symmetric matrices
[ E.15.20 ]
Eigenvalues of symmetric positive (semi)definite matrices
[ E.15.21 ]
Eigenvalues of the inverse of a matrix
[ E.15.22 ]
Relation among trace, determinant and eigenvalues
[ E.15.23 ]
The UDU-Cholesky decomposition
[ E.15.24 ]
Gramian and linear independence
[ E.15.25 ]
Finite-dimensional inner products
[ E.15.26 ]
Affine equivariance of Gram matrix
[ E.15.27 ]
Recursion for eigenvalues and eigenvectors in two dimensions
[ E.15.28 ]
PCA with repeated eigenvalues
[ E.15.29 ]
The constrained Procrustes problem
[ E.15.30 ]
Minimum torsion orthonormalization
[ E.15.31 ]
Linearity of vectorization
[ E.15.32 ]
Partitioned matrix inversion
[ E.15.33 ]
Inverse of a block-triangular matrix
[ E.15.34 ]
Inverse of an upper-triangular Toeplitz matrix
[ E.16 ]
Calculus primer
[ E.16.1 ]
Differentiability characterization
[ E.16.2 ]
Gradient of the quadratic form
[ E.16.3 ]
Gradient chain rule
[ E.16.4 ]
Chain rule for first order differential
[ E.16.5 ]
First derivative of monotonic functions
[ E.16.6 ]
Cubic function is strictly increasing
[ E.16.7 ]
Strictly monotone maps are invertible
[ E.16.8 ]
Alternative convexity criterion
[ E.16.9 ]
Convex functions have invertible gradients
[ E.17 ]
Functional analysis
[ E.17.1 ]
The Fourier integral is the most general form of the Fourier transform
[ E.17.2 ]
The Fourier Transform as a rescaled unitary operator
[ E.17.3 ]
Fourier transform of the Dirac delta
[ E.18 ]
Optimization primer
[ E.18.1 ]
Newton’s method
[ E.18.2 ]
Equality constraints must be affine
[ E.18.3 ]
Semi-definite cones
[ E.18.4 ]
Alternate SDP formulation
[ E.18.5 ]
Ice-cream cones of dimension m̄
[ E.18.6 ]
QCQP as special case of SOCP
[ E.18.7 ]
Regularized regression is regularized quadratic
[ E.18.8 ]
Constrained generalized elastic net is quadratic programming
[ E.18.9 ]
Generalized lasso is lasso
[ E.18.10 ]
Lasso penalty in constrained selection
[ E.18.11 ]
Equivalent quadratic optimization for portfolio replication
E.VI. Statistics
[ E.19 ]
Distributions
[ E.19.1 ]
Pdf of an invertible function of a univariate random variable
[ E.19.2 ]
Cdf of an invertible function of a univariate random variable
[ E.19.3 ]
Quantile function and inverse cdf
[ E.19.4 ]
Quantile of an invertible function of a random variable
[ E.19.5 ]
Expected value in terms of the quantile
[ E.19.6 ]
Sub-quantile as conditional expectation
[ E.19.7 ]
Sub-quantile of an affine transformation
[ E.19.8 ]
Multivariate Student t distribution: cumulative distribution function
[ E.19.9 ]
Chi-distribution: numerical implementation of the quantile
[ E.19.10 ]
Relation between the characteristic function and the moments
[ E.19.11 ]
Moments of the chi-squared distribution
[ E.19.12 ]
Scaling property of the gamma distribution
[ E.19.13 ]
Equivalence between gamma and chi-squared distribution
[ E.19.14 ]
Moments of the gamma distribution
[ E.19.15 ]
Expectation of the exponential of a gamma random variable
[ E.19.16 ]
Quadratic-normal distribution in terms of independent standard normal variables
[ E.19.17 ]
Log-characteristic function of quadratic-normal distribution
[ E.19.18 ]
Saddle point approximation of the quadratic-normal distribution
[ E.19.19 ]
Variance of quadratic-normal distribution
[ E.19.20 ]
Wishart and gamma distribution
[ E.19.21 ]
Marginals of a Wishart distribution
[ E.19.22 ]
Result on the joint distribution of a bivariate random variable
[ E.19.23 ]
Conditional pdf
[ E.19.24 ]
Conditional quantile and conditional characteristic function
[ E.19.25 ]
Conditional and unconditional expectation
[ E.19.26 ]
Conditional and unconditional invariance
[ E.19.27 ]
Conditional distribution between normal random variables
[ E.19.28 ]
Conditional distribution between lognormal random variables
[ E.19.29 ]
Conditional expectation of two sets of lognormal random variables
[ E.19.30 ]
Law of total variance: joint Student t
[ E.19.31 ]
Law of total variance: joint lognormal
[ E.19.32 ]
Covariance and correlation parametrizations of two sets of multivariate random variables
[ E.19.33 ]
Conditional expectation and covariance of two sets of normal random variables
[ E.19.34 ]
Marginalization cdf formula
[ E.19.35 ]
Pdf of an invertible function of a multivariate random variable
[ E.19.36 ]
Pdf of a non-invertible function of a multivariate random variable
[ E.19.37 ]
Cdf of an invertible comonotonic function of a multivariate random variable
[ E.19.38 ]
Pdf of a non-invertible affine transformation of a multivariate random variable
[ E.19.39 ]
Characteristic function of a multivariate normal random variable I
[ E.19.40 ]
Cdf of the lognormal distribution
[ E.19.41 ]
Non-central moments of a multivariate lognormal random variable
[ E.19.42 ]
Moments of the reflected shifted lognormal distribution
[ E.19.43 ]
Expectation and covariance of a multivariate lognormal random variable
[ E.19.44 ]
Expectation, standard deviation and skewness of a linear combination of multivariate shifted lognormal random vector
[ E.19.45 ]
Gradient of the pdf of a multivariate affine function
[ E.19.46 ]
Hessian of the pdf of a multivariate affine transformation
[ E.19.47 ]
Gradient of the log-pdf of a multivariate variable
[ E.19.48 ]
Hessian of the log-pdf of a multivariate variable
[ E.19.49 ]
Pdf of an inverse-Wishart random variable
[ E.19.50 ]
Equivalence between definitions of elliptical distribution
[ E.19.51 ]
Radial component and generator function of elliptical distributions
[ E.19.52 ]
Radial component of multivariate normal is chi distributed
[ E.19.53 ]
Radial component of multivariate Student t
[ E.19.54 ]
Building Student t scenarios with a low-rank-diagonal correlation matrix
[ E.19.55 ]
Radial component of a uniform random variable inside an ellipsoid
[ E.19.56 ]
Moments of an elliptical random variable
[ E.19.57 ]
Expectation of Mahalanobis square distance of normal random variables
[ E.19.58 ]
Expectation of Mahalanobis square distance of Student t random variables
[ E.19.59 ]
Moments of a uniform random variable inside an ellipsoid
[ E.19.60 ]
Moments of the uniform component of an elliptical distribution
[ E.19.61 ]
Elliptical distributions: formula for the generator of a univariate affine transformation
[ E.19.62 ]
Elliptical distributions: generator of the marginal distribution of a uniform inside the unit circle
[ E.19.63 ]
Normal distribution as limit of Student t distribution
[ E.19.64 ]
Normal generator as limit of Student t generator
[ E.19.65 ]
Marginal distribution of a uniform random variable inside the unit sphere
[ E.19.66 ]
Truncated quantile
[ E.19.67 ]
Stress distribution of elliptical is elliptical
[ E.19.68 ]
Stress quantile of elliptical distributions
[ E.19.69 ]
Alternative stochastic representation of elliptical random variables
[ E.19.70 ]
Gini coefficient in terms of covariance
[ E.19.71 ]
Scenario-probability distribution: cdf
[ E.19.72 ]
Scenario-probability distribution: probability density function
[ E.19.73 ]
Scenario-probability distribution: expectation
[ E.19.74 ]
Scenario-probability distribution: invariance rule
[ E.19.75 ]
Scenario-probability distribution: expectation rule
[ E.19.76 ]
Scenario-probability distribution: cdf via expectation rule
[ E.19.77 ]
Scenario-probability distribution: characteristic function
[ E.19.78 ]
Scenario-probability distribution: quantile
[ E.19.79 ]
Scenario-probability distribution: quantile for uniform flexible probabilities
[ E.19.80 ]
Smooth quantile through scenario-probability quantile
[ E.19.81 ]
Scenario-probability covariance matrix
[ E.19.82 ]
Scenario-probability correlation matrix
[ E.19.83 ]
Scenario-probability distribution: positive probabilities
[ E.19.84 ]
Conditional distribution between normal random variables in canonical parametrization
[ E.19.85 ]
Maximum partition encoder: underlying partition
[ E.19.86 ]
Multinomial logit parametrization
[ E.19.87 ]
Multinomial probit parametrization
[ E.19.88 ]
Effective number of scenarios boundedness: exponential of the entropy
[ E.19.89 ]
Effective number of scenarios boundedness: generalized exponential of the entropy
[ E.19.90 ]
Effective number of scenarios counting crisp scenarios: exponential of the entropy
[ E.19.91 ]
Effective number of scenarios counting crisp scenarios: generalized exponential of the entropy
[ E.19.92 ]
Characteristic function of exponential family distributions
[ E.19.93 ]
Exponential family distributions: expectation of the sufficient statistics
[ E.19.94 ]
Exponential family distributions: covariance of the sufficient statistics
[ E.19.95 ]
Joint mean and covariance of a mixture model
[ E.19.96 ]
Mixture probabilities
[ E.19.97 ]
Normal mixtures
[ E.19.98 ]
Abstract Bayes theorem
[ E.19.99 ]
Radon-Nikodym derivative on finite spaces
[ E.19.100 ]
Conditional expectation over elementary events
[ E.19.101 ]
Conditional pdf as L2 projection
[ E.19.102 ]
Adapted approximations
[ E.19.103 ]
Radon-Nikodym with log-normal market
[ E.19.104 ]
Conditional expectation: equivalent formulation and Radon-Nikodym
[ E.20 ]
Copulas
[ E.20.1 ]
Distribution of the grade
[ E.20.2 ]
Inverse cdf sampling
[ E.20.3 ]
Pdf of a copula
[ E.20.4 ]
Sklar’s theorem
[ E.20.5 ]
Pdf of the copula of a bivariate normal
[ E.20.6 ]
Pdf of a normal copula
[ E.20.7 ]
Cdf of a copula
[ E.20.8 ]
Comonotonic invariance of copulas
[ E.20.9 ]
Copulas of elliptical distributions
[ E.21 ]
Geometry of distributions
[ E.21.1 ]
Riemannian metric: curve length
[ E.21.2 ]
Riemannian metric: volume
[ E.21.3 ]
Fisher information metric: covariant property
[ E.21.4 ]
Fisher information metric: univariate normal distribution
[ E.21.5 ]
E-affine coordinates of univariate normal distributions
[ E.21.6 ]
M-affine coordinates of univariate normal distributions
[ E.21.7 ]
Duality of univariate normal distributions
[ E.21.8 ]
Fisher information metric: univariate normal distribution (dual parameters)
[ E.21.9 ]
Legendre dual function: Hessian matrix
[ E.21.10 ]
Legendre dual function: duality
[ E.21.11 ]
Legendre transformation
[ E.21.12 ]
Potential functions of univariate normal distributions
[ E.21.13 ]
Bregman divergence of univariate normal distributions
[ E.21.14 ]
E-affine coordinates of exponential family
[ E.21.15 ]
Geodesic of exponential family
[ E.21.16 ]
Tangent vector of multivariate normal distributions
[ E.21.17 ]
Gradient of the normal log partition function
[ E.21.18 ]
Expectation parameters of multivariate normal distributions
[ E.21.19 ]
Fisher information metric: multivariate normal distribution
[ E.21.20 ]
Transpose Jacobian of the normal distribution
[ E.21.21 ]
Relative entropy: exponential family
[ E.21.22 ]
Fisher information metric: scenario-probability distribution
[ E.22 ]
Location and dispersion
[ E.22.1 ]
Relation between z-score and signal-to-noise ratio
[ E.22.2 ]
Affine equivariance implies Mahalanobis distance invariance
[ E.22.3 ]
Mahalanobis distance invariance implies affine equivariance
[ E.22.4 ]
Absolute z-score invariance
[ E.22.5 ]
Affine property of argmax
[ E.22.6 ]
Affine equivariance of the mode
[ E.22.7 ]
Affine equivariance of the modal dispersion
[ E.22.8 ]
Affine equivariance of the median
[ E.22.9 ]
Affine equivariance of the interquantile range
[ E.22.10 ]
Affine equivariance of the expectation
[ E.22.11 ]
Affine equivariance of the standard deviation
[ E.22.12 ]
Monotonic invariance of the median
[ E.22.13 ]
Mode of a univariate lognormal
[ E.22.14 ]
Modal dispersion of a univariate lognormal
[ E.22.15 ]
Orthogonality of eigenvectors
[ E.22.16 ]
Recursion for eigenvalues and eigenvectors
[ E.22.17 ]
Points with constant Mahalanobis distance form an ellipsoid
[ E.22.18 ]
Integral of Mahalanobis distance
[ E.22.19 ]
Affine equivariance implies Mahalanobis distance invariance (multivariate case)
[ E.22.20 ]
Mahalanobis distance invariance implies affine equivariance (multivariate case)
[ E.22.21 ]
Affine equivariance of the mode (multivariate case)
[ E.22.22 ]
Affine equivariance of the modal square-dispersion
[ E.22.23 ]
Mode of a multivariate lognormal distribution
[ E.22.24 ]
Modal square-dispersion of a multivariate lognormal
[ E.22.25 ]
Compact formula for multivariate expectation
[ E.22.26 ]
Compact formula for the covariance matrix
[ E.22.27 ]
Covariance matrix as matrix-variate expectation
[ E.22.28 ]
Generalized affine equivariance of the expectation (multivariate case)
[ E.22.29 ]
Generalized affine equivariance of the covariance
[ E.22.30 ]
Bilinearity of the covariance
[ E.22.31 ]
Expectation and covariance of a multivariate shifted lognormal
[ E.22.32 ]
Affine equivariance of the cross-covariance
[ E.22.33 ]
Expectation of the sum of two variables
[ E.22.34 ]
Covariance matrix of the sum of two variables
[ E.22.35 ]
Mode of sum of two gamma distributions
[ E.22.36 ]
Expectation and variance of the gamma distribution
[ E.22.37 ]
Mode and modal square dispersion of a gamma distribution
[ E.22.38 ]
Generalized affine equivariance does not hold for modal square dispersion
[ E.22.39 ]
Alternative generalization of uncertainty band
[ E.22.40 ]
Alternative generalization of uncertainty band
[ E.22.41 ]
Multivariate uncertainty band
[ E.22.42 ]
Gradient of normal characteristic function
[ E.22.43 ]
Hessian of normal characteristic function
[ E.22.44 ]
Taylor expansion of the characteristic function
[ E.22.45 ]
Tangent box of the ellipsoid
[ E.22.46 ]
Principal directions and principal variances
[ E.22.47 ]
Property of expectation
[ E.22.48 ]
Multivariate Markov inequality
[ E.22.49 ]
Generalized Chebyshev’s inequality
[ E.22.50 ]
First order differential of the square Mahalanobis distance
[ E.22.51 ]
Chebyshev’s inequality and the most likely set
[ E.22.52 ]
Mahalanobis square distance of normal random variables
[ E.22.53 ]
Explicit expression of the error matrix
[ E.22.54 ]
Affine equivariance of linear projection and partial covariance
[ E.22.55 ]
Explicit expression of the loss matrix
[ E.22.56 ]
L2 of law of total variance
[ E.22.57 ]
Linear and non-linear projections under normality
[ E.22.58 ]
Visualization map is isometry
[ E.22.59 ]
Cauchy-Schwarz inequality
[ E.22.60 ]
Alternative formulation of multivariate inner product
[ E.22.61 ]
Expectation length and distance
[ E.22.62 ]
Relationship between non-central and central tracking errors
[ E.22.63 ]
Quantile and subquantile-deviation
[ E.22.64 ]
Variational location and dispersion are affine equivariant
[ E.22.65 ]
Bregman location-dispersion
[ E.22.66 ]
Multivariate p-quantile
[ E.22.67 ]
Lp spaces
[ E.22.68 ]
R-squared and equivalent optimization objective
[ E.22.69 ]
Inner product in terms of the covariance matrix
[ E.22.70 ]
Extension of visualization map
[ E.22.71 ]
Alternative visualization basis
[ E.22.72 ]
Best approximation: shifted orthogonal projection
[ E.22.73 ]
Best linear prediction: solution
[ E.22.74 ]
Characterization of conditional expectation
[ E.22.75 ]
Best approximation: equivalent characterization
[ E.22.76 ]
Cholesky root via Gram-Schmidt
[ E.23 ]
Correlation and generalizations
[ E.23.1 ]
Interpretation of independence
[ E.23.2 ]
Characterization of independence through copulas
[ E.23.3 ]
Cdf of uniform distribution on the unit square
[ E.23.4 ]
Cdf of an “extreme” copula (Fréchet-Hoeffding lower bound)
[ E.23.5 ]
Cdf of an “extreme” copula (Fréchet-Hoeffding upper bound)
[ E.23.6 ]
Fréchet-Hoeffding bounds and copula of monotonic variables
[ E.23.7 ]
Schweizer-Wolff measure: equivalent expression
[ E.23.8 ]
Copulas of non-comonotonic variables
[ E.23.9 ]
Regularized call option payoff
[ E.23.10 ]
Regularized put option payoff
[ E.23.11 ]
Kendall’s tau: equivalent expression
[ E.23.12 ]
Correlation: affine concordance and discordance
[ E.23.13 ]
Correlation: invariance under positive affine transformations
[ E.23.14 ]
Correlation: symmetry with affine discordance
[ E.23.15 ]
Correlation between lognormal variables
[ E.24 ]
Statistical decision theory
[ E.24.1 ]
Equivalent definition of weak dominance
[ E.24.2 ]
Strong dominance implies weak dominance
[ E.24.3 ]
Equivalent definitions of second order stochastic dominance
[ E.24.4 ]
Non-admissibility of randomized decision functions for convex decision theory problems
[ E.24.5 ]
Decision theory: the ensemble approach
[ E.25 ]
Useful algorithms
[ E.25.1 ]
Moment-matching, scenario twisting: equations proof
[ E.25.2 ]
Number of observations in a generic bin
[ E.25.3 ]
Normalized empirical histogram approximating the true unknown pdf
E.VII. Factor models and learning
[ E.26 ]
Linear factor models
[ E.26.1 ]
Regression LFM’s: loadings
[ E.26.2 ]
Regression LFM’s: r-squared
[ E.26.3 ]
Regression LFM’s: covariance of residuals with factors
[ E.26.4 ]
Regression LFM’s: covariance of residuals
[ E.26.5 ]
Symmetric regression: analytical solution
[ E.26.6 ]
Statistically orthogonal vectors
[ E.26.7 ]
Karhunen–Loève: covariance eigenvectors have minimum entropy
[ E.26.8 ]
Eigenvalues of 2×2 positive matrix
[ E.26.9 ]
Eigenvectors of 2×2 positive matrix
[ E.26.10 ]
Dominant-residual LFM’s: mean squared error
[ E.26.11 ]
Regression LFM’s: differential of r-squared
[ E.26.12 ]
Regression LFM’s: concavity of r-squared
[ E.26.13 ]
Regression LFM’s: independence of residuals and factors (normal case)
[ E.26.14 ]
Regression LFM’s: r-squared and residual variance (univariate normal case)
[ E.26.15 ]
Parametrization of a square-dispersion
[ E.26.16 ]
Principal-component LFM’s: differential of r-squared
[ E.26.17 ]
Eigenvectors property
[ E.26.18 ]
Eigenfunctions property
[ E.26.19 ]
Cross-sectional LFM’s: differential of r-squared
[ E.26.20 ]
Principal factors and components of a bivariate normal
[ E.26.21 ]
Principal-component LFM’s: loadings and construction matrix
[ E.26.22 ]
Principal-component LFM’s: canonical loadings and construction matrix
[ E.26.23 ]
Principal-component LFM’s: covariance of factors
[ E.26.24 ]
Principal-component LFM’s: canonical solutions via recursive approach
[ E.26.25 ]
Principal-component LFM’s: loadings matrix is full rank
[ E.26.26 ]
Principal-component LFM’s: complementary projectors
[ E.26.27 ]
Principal-component LFM’s: rescaled prediction
[ E.26.28 ]
Principal-component LFM’s: r-squared
[ E.26.29 ]
Principal-component LFM’s: covariance of residuals with factors
[ E.26.30 ]
Principal-component LFM’s: covariance of residuals
[ E.26.31 ]
Static principal component estimation framework
[ E.26.32 ]
Principal-component LFM’s: equivalent formulation
[ E.26.33 ]
Factor analysis LFM’s: constraints
[ E.26.34 ]
Factor-analysis LFM’s: r-squared optimization
[ E.26.35 ]
Factor analysis LFM’s: first-step optimization
[ E.26.36 ]
Factor analysis LFM’s: PAF initialization
[ E.26.37 ]
Factor analysis LFM’s: idiosyncratic variances update
[ E.26.38 ]
Factor analysis LFM’s: bivariate solution with isotropic variances
[ E.26.39 ]
Factor analysis LFM’s: general solution with isotropic variances
[ E.26.40 ]
Factor analysis LFM’s: rotated factors
[ E.26.41 ]
Factor analysis LFM’s: regression factors
[ E.26.42 ]
Cross-sectional LFM’s: construction matrix
[ E.26.43 ]
Cross-sectional LFM’s: concavity of r-squared
[ E.26.44 ]
Rank property and positive definiteness for products
[ E.26.45 ]
Cross-sectional LFM’s: rank of construction matrix
[ E.26.46 ]
Cross-sectional LFM’s: complementary projectors
[ E.26.47 ]
Cross-sectional LFM’s: r-squared
[ E.26.48 ]
Cross-sectional LFM’s: r-squared under natural scatter specification
[ E.26.49 ]
Cross-sectional LFM’s: regression loadings under natural scatter specification
[ E.26.50 ]
Cross-sectional LFM’s: covariance of residuals with factors under natural scatter specification
[ E.26.51 ]
Cross-sectional LFM’s: minimum-variance portfolio
[ E.26.52 ]
Cross-sectional LFM’s: equivalent pseudo inverses
[ E.26.53 ]
Cross-sectional LFM’s: regression loadings under systematic-idiosyncratic assumption
[ E.26.54 ]
Cross-sectional LFM’s: regression factor replication
[ E.26.55 ]
Inconsistency between factor analysis and LFM’s with hidden factors
[ E.26.56 ]
Static cross-sectional LFM’s: sample r-squared maximization
[ E.26.57 ]
Affine equivariance of factor loadings and shift
[ E.26.58 ]
Conditional principal component analysis by iterating the classical PCA
[ E.26.59 ]
Eigenvalues of multiplication
[ E.26.60 ]
Transpose-square-root via CPCA
[ E.27 ]
Machine learning foundations
[ E.27.1 ]
Conditionally orthogonal linear model: relationship with systematic-idiosyncratic linear factor models
[ E.27.2 ]
Loss-implied scoring rules
[ E.27.3 ]
Discriminant model as generative model
[ E.27.4 ]
Cross entropy and relative entropy
[ E.27.5 ]
Scoring rule divergence as regret
[ E.27.6 ]
Loss-implied scoring rule as proper scoring rule
[ E.28 ]
Supervised learning: regression
[ E.28.1 ]
Regression LFM and linear least-squares regression
[ E.28.2 ]
Law of total variance in ANOVA models
[ E.28.3 ]
Linear normal regression gradient
[ E.28.4 ]
Functional derivative of mean-squared error
[ E.28.5 ]
Linear discriminant regression model with affine features
[ E.28.6 ]
Linear discriminant regression model with affine features: generative embedding
[ E.28.7 ]
Cross-entropy minimization of a normal model
[ E.28.8 ]
Non-linear normal regression gradient
[ E.28.9 ]
Generalized linear models: optimum predictor
[ E.28.10 ]
Non-linear generalized models
[ E.29 ]
Supervised learning: classification
[ E.29.1 ]
Non-parametric classification: equivalent minimizations for binary classification
[ E.29.2 ]
Non-parametric classification: equivalent minimizations for multiple classification
[ E.29.3 ]
Non-parametric classification: conditional probability and weight of evidence
[ E.29.4 ]
Binary point classification: theoretical optimum via Neyman-Pearson lemma
[ E.29.5 ]
Binary point classification: likelihood ratio invariance
[ E.29.6 ]
Binary point classification: ROC curvature
[ E.29.7 ]
Non-parametric classification: false and true positive rates (normal case)
[ E.29.8 ]
Non-parametric classification: optimal predictor (normal case)
[ E.29.9 ]
Non-parametric classification: ROC function (normal case)
[ E.29.10 ]
Binary classification: alternative optimal cutoff
[ E.29.11 ]
Supervised point predictors: false positive and negative rates
[ E.29.12 ]
General case of 0-1 loss
[ E.29.13 ]
Expected loss
[ E.29.14 ]
Perceptron error
[ E.29.15 ]
Multinomial classification: binary loss generalization
[ E.29.16 ]
Discriminant classification: cross-entropy
[ E.29.17 ]
Multinomial probit regression
[ E.29.18 ]
Probabilistic misclassification: score and error
[ E.29.19 ]
Non-parametric classification: joint and marginal distribution of inputs and output
[ E.29.20 ]
Multinomial logistic regression: generalized linear model
[ E.29.21 ]
Binary logistic regression: error
[ E.29.22 ]
Multinomial logistic regression: error
[ E.29.23 ]
Binary probit regression: error
[ E.29.24 ]
Multinomial probit regression: error
[ E.29.25 ]
Functional derivative of cross-entropy
[ E.30 ]
Unsupervised learning
[ E.30.1 ]
Statistical minimum-torsion optimization
[ E.30.2 ]
Unsupervised predictor: k-means clustering
[ E.30.3 ]
Partial orthogonality in systematic-idiosyncratic linear factor models
[ E.30.4 ]
Probabilistic factor analysis: consistency with factor analysis models
[ E.30.5 ]
Naive Bayes models: weight of evidence
[ E.30.6 ]
Naive Bayes models: weight of evidence (normal case)
[ E.30.7 ]
Kernel principal component analysis
[ E.30.8 ]
Graphical models: probabilistic principal component
[ E.30.9 ]
Bayes networks: equivalent specification of the local Markov property
[ E.31 ]
Generalized probabilistic inference
[ E.31.1 ]
Minimum relative entropy and exponential family
[ E.31.2 ]
Distributional views updated: analytical formula
[ E.31.3 ]
Point views updated: analytical formula
[ E.31.4 ]
Multiplicative opinion pooling as minimum relative entropy update
[ E.31.5 ]
Minimum relative entropy and exponential family: view parameters range
[ E.31.6 ]
Conditioning between normal variables
[ E.31.7 ]
Gradient of relative entropy
[ E.31.8 ]
Hessian of relative entropy
[ E.31.9 ]
Extremeness of the views
[ E.31.10 ]
Sensitivity to the views
[ E.31.11 ]
Convexity of relative entropy
[ E.31.12 ]
Minimum relative entropy via analytical implementation: updated distribution
[ E.31.13 ]
Minimum relative entropy via analytical implementation: updated distribution via projectors
[ E.31.14 ]
Minimum relative entropy via scenario-probability implementation: updated distribution
[ E.31.15 ]
Minimum relative entropy with scenario-probability implementation: gradient and Hessian of dual Lagrangian
[ E.31.16 ]
Minimum relative entropy with scenario-probability implementation: views on conditional value at risk
[ E.31.17 ]
Partial views of the exponential family: view on standard deviation
[ E.31.18 ]
Partial views of the exponential family: view on correlation
[ E.31.19 ]
Gradient of relative entropy with low-rank-diagonal covariance
[ E.31.20 ]
Degrees of freedom of a low-rank-diagonal matrix
[ E.31.21 ]
Chain rule for second derivatives
[ E.31.22 ]
Hessian of relative entropy with low-rank-diagonal covariance
[ E.31.23 ]
Gradient of constraint function on signal
[ E.31.24 ]
Hessian of constraint function on signal
[ E.31.25 ]
Views on joint and conditional distributions
[ E.31.26 ]
Views on ex-ante signal-to-noise ratios: formula
[ E.31.27 ]
Copula opinion pooling: choice of the rotation matrix
[ E.31.28 ]
Distance minimum
[ E.31.29 ]
Distance equivalence
[ E.31.30 ]
Generalized shrinkage for covariance: sparse eigenvector rotation
[ E.31.31 ]
Generalized shrinkage for correlation: homogeneous clusters
[ E.31.32 ]
Generalized shrinkage for correlation: Markov networks
[ E.32 ]
Dynamic and spatial models
[ E.32.1 ]
Dominant residual DFM: equivalent formulation I
[ E.32.2 ]
Dominant residual DFM: equivalent formulation II
[ E.32.3 ]
Derivation of the dynamic regression filter
E.VIII. Stochastic processes
[ E.33 ]
Stochastic processes primer
[ E.33.1 ]
Finite-dimensional distributions of AR(1) with normal shocks
[ E.33.2 ]
Finite-dimensional distributions of AR(1) with Student t shocks
[ E.33.3 ]
Conditional distributions of AR(1) with normal shocks
[ E.33.4 ]
Conditional distributions of AR(1) with Student t shocks
[ E.33.5 ]
Conditional expectation of the stochastic process is coherent with the conditional expectation with respect to random variables
[ E.33.6 ]
Stochastic processes adapted to a filtration: fundamental property
[ E.33.7 ]
Conditional probabilities of adapted processes
[ E.33.8 ]
Conditional expectation process at each time t is adapted to the information set at time t
[ E.33.9 ]
Paths of conditional expectation process
[ E.33.10 ]
Law of the iterated expectations
[ E.33.11 ]
Radon-Nikodym process
[ E.33.12 ]
Adapted abstract Bayes theorem
[ E.33.13 ]
Distribution of price process under change of measures
[ E.33.14 ]
The price process is a martingale under Q
[ E.33.15 ]
Radon-Nikodym derivative process
[ E.34 ]
Covariance stationary processes
[ E.34.1 ]
About prediction as orthogonal projection with respect to infinite-dimensional subspaces
[ E.34.2 ]
Partial autocorrelation function as prediction (univariate)
[ E.34.3 ]
Spectral decomposition in terms of rescaled eigenvectors
[ E.34.4 ]
The eigenvectors of the covariance matrix of a covariance stationary process are trigonometric waves
[ E.34.5 ]
Principal factors re-indexing and rescaling
[ E.34.6 ]
An example of process with purely singular integral power spectrum
[ E.34.7 ]
Symmetry of the spectral density
[ E.34.8 ]
The diagonal elements of the spectral density are real-valued and positive
[ E.34.9 ]
Some notes on the stochastic integral appearing in Cramér’s decomposition
[ E.34.10 ]
Equivalent formulations of the Cramér decomposition
[ E.34.11 ]
Expectation and autocovariance functions of the filtered process [work in progress]
[ E.34.12 ]
Conditions for the finiteness of covariance of the filtered process
[ E.34.13 ]
Cross-spectral density between the two input processes
[ E.34.14 ]
Impulse response function of the composition of LTI filters
[ E.34.15 ]
Composition of causal LTI filters is causal
[ E.34.16 ]
Autocovariance of the inverse filter
[ E.34.17 ]
The partial Wold decomposition
[ E.34.18 ]
Convergence of the partial Wold decomposition
[ E.34.19 ]
Wold representation and rotations
[ E.35 ]
Common mean-covariance processes
[ E.35.1 ]
Structural VAR(1)
[ E.35.2 ]
Expectation and autocovariance function of the AR(1) process
[ E.35.3 ]
Spectral density of AR(1) processes
[ E.35.4 ]
Half-life of AR(1) process
[ E.35.5 ]
Expectation and autocovariance function of the VAR(1) process
[ E.35.6 ]
Bivariate VAR(1) process: autocovariance function and spectral density
[ E.35.7 ]
Spectral density of VAR(1) processes
[ E.35.8 ]
Error correction representation of unit-root VAR(1) process
[ E.35.9 ]
Linear prediction of cointegrated bivariate VAR(1)
[ E.35.10 ]
Linear prediction of the VAR(1) process
[ E.35.11 ]
Linear prediction of the VAR(1) process: random walk and cointegration limit
[ E.35.12 ]
Identification of structural VAR(1): alternative approach [work in progress]
[ E.35.13 ]
Causal VARMA(p,q) processes
[ E.35.14 ]
Spectral density of causal VARMA(p,q) processes
[ E.35.15 ]
Identification of the hidden process in linear state-space models
[ E.35.16 ]
Prediction of linear state-space models
[ E.35.17 ]
Derivation of the Kalman filter
[ E.35.18 ]
The Kalman filter (static case)
[ E.35.19 ]
VARMA as linear state-space model
[ E.36 ]
Invariance tests
[ E.37 ]
Continuous time processes
[ E.37.1 ]
Characteristic function of standard Poisson process
[ E.37.2 ]
The compound Poisson process is a continuous combination of Poisson processes
[ E.37.3 ]
Characteristic function of continuous combination of Poisson processes
[ E.37.4 ]
Characteristic function of Poisson process on grid
[ E.37.5 ]
Characteristic function of compound Poisson process
[ E.37.6 ]
Characteristic function of standard Brownian motion
[ E.37.7 ]
Characteristic function of arithmetic Brownian motion with drift
[ E.37.8 ]
The Lévy-Khintchine representation of Lévy processes
[ E.37.9 ]
Δt-step location and dispersion parameters of Cauchy random walk
[ E.37.10 ]
Variance gamma parametrizations
[ E.37.11 ]
Relationship between CIR and Ornstein-Uhlenbeck processes
[ E.37.12 ]
Projection of a Markov chain: generator
[ E.37.13 ]
Fractional Brownian motion
[ E.37.14 ]
Distribution of the Δt-step and of the Δt-step shock of the OU process
[ E.37.15 ]
Conditional distribution and moments of OU
[ E.37.16 ]
Unconditional distribution of stationary OU
[ E.37.17 ]
OU process and the Brownian motion
[ E.37.18 ]
VAR(1) is MVOU
[ E.37.19 ]
Deterministic linear dynamic system
[ E.37.20 ]
Dynamics and distribution of the MVOU process Z_t
[ E.37.21 ]
Distribution of the Δt-step and Δt-step shock of the MVOU process
[ E.37.22 ]
Conditional distribution and moments of MVOU
[ E.37.23 ]
MVOU process and the Brownian motion
[ E.37.24 ]
Unconditional distribution of stationary MVOU
[ E.37.25 ]
MVOU (auto)covariances
E.IX. Estimation theory
[ E.38 ]
Probabilistic estimation and inference techniques
[ E.38.1 ]
Maximum likelihood parameters of multivariate normal
[ E.38.2 ]
Free energy of the posterior
[ E.38.3 ]
Minimum of free energy
[ E.38.4 ]
Bayes’ rule
[ E.38.5 ]
Normal distribution with fixed variance: prior, posterior, and posterior predictive
[ E.38.6 ]
Posterior distribution of exponential family distributions
[ E.38.7 ]
Predictive distribution of exponential family distributions
[ E.38.8 ]
The EM algorithm
[ E.38.9 ]
EM algorithm for i.i.d. processes
[ E.38.10 ]
Maximum likelihood for longitudinal panels of data
[ E.38.11 ]
Smoothing/nowcasting of the hidden variables for i.i.d. processes
[ E.38.12 ]
EM algorithm for state-space processes
[ E.38.13 ]
Conditioning as IM projection
[ E.38.14 ]
The ELBO is a lower bound for the evidence of data
[ E.38.15 ]
The EM algorithm in population
[ E.39 ]
Estimation and assessment
[ E.39.1 ]
The posterior error of the relative entropy loss
[ E.39.2 ]
p-value of the sample mean: normal invariants, known variance
[ E.39.3 ]
Sample mean loss distribution
[ E.39.4 ]
Error, bias, inefficiency
[ E.39.5 ]
Inefficiency of the sample mean
[ E.39.6 ]
Posterior expectation
[ E.39.7 ]
Estimation error of the sample mean
[ E.39.8 ]
Sample mean loss in a homogeneously correlated market
[ E.39.9 ]
Estimation error of the sample covariance
[ E.40 ]
Bias reduction
[ E.40.1 ]
Functional gradient descent
[ E.40.2 ]
Gradient boosting
[ E.40.3 ]
Solution of linear least-squares regression
[ E.40.4 ]
Polynomial features
[ E.40.5 ]
Kernel trick
[ E.40.6 ]
Equivalent expression for piecewise linear functions
[ E.41 ]
Estimation and regularization
[ E.41.1 ]
Regression LFM’s: equivalent formulation
[ E.41.2 ]
Matrix decomposition
[ E.41.3 ]
Regression LFM’s as quadratic programming
[ E.41.4 ]
Cross-sectional LFM as quadratic programming
[ E.41.5 ]
Feature engineering: error derivative
[ E.41.6 ]
Exponential format of an arbitrary pdf
[ E.42 ]
Hypothesis testing
[ E.42.1 ]
t-statistic of the sample mean: normal invariants, unknown variance
[ E.42.2 ]
Univariate testing: consistency of the sample variance
[ E.42.3 ]
Univariate testing: the square standard error
[ E.42.4 ]
Univariate testing: the z-statistic
[ E.42.5 ]
Multivariate testing: consistency of the sample covariance
[ E.42.6 ]
Multivariate testing: the Hotelling statistic
Featured case studies
E.X. Quantitative finance: the "Checklist"
[ E.43 ]
Historical Checklist
[ E.44 ]
Monte Carlo Checklist
E.XI. Data science: factor models and learning
[ E.45 ]
Principal component analysis of the yield curve
[ E.45.1 ]
Martingale property
[ E.45.2 ]
Spectral basis in the continuum
[ E.45.3 ]
Eigenvalues integration
[ E.46 ]
Machine learning for hedging
[ E.46.1 ]
Machine learning for hedging: CART predictor as a portfolio of digital options
[ E.47 ]
Regression in the stock market
[ E.47.1 ]
Regression LFM’s: maximum likelihood with flexible probabilities estimates of factor loadings and residual covariance
[ E.47.2 ]
Regression LFM’s: maximum likelihood with flexible probabilities estimates under t-conditional residuals
[ E.47.3 ]
Regression LFM’s: distribution of least square estimates
[ E.47.4 ]
Regression LFM’s: likelihood
[ E.47.5 ]
Regression LFM’s: distribution of loadings under NIW assumption
[ E.47.6 ]
Regression LFM’s: posterior distribution under NIW assumption
[ E.47.7 ]
Regression LFM’s: mode of posterior under NIW
[ E.47.8 ]
Regression LFM’s: modal dispersion of posterior under NIW
[ E.47.9 ]
Regression LFM’s: predictive distribution
[ E.47.10 ]
Lasso as a generalization of maximum likelihood with flexible probabilities
[ E.47.11 ]
Inputs standardization in lasso regression
[ E.48 ]
Credit default classification
[ E.49 ]
Clustering for the stock market