In probability theory, a random measure is a measure-valued random element. Random measures are for example used in the theory of random processes, where they form many important point processes such as Poisson point processes and Cox processes.

Definition

Random measures can be defined either as transition kernels or as random elements. Both definitions are equivalent. For the definitions, let $E$ be a separable complete metric space and let $\mathcal{E}$ be its Borel $\sigma$-algebra. (The most common example of a separable complete metric space is $\mathbb{R}^{n}$.)

As a transition kernel

A random measure $\zeta$ is an almost surely locally finite transition kernel from an abstract probability space $(\Omega, \mathcal{A}, P)$ to $(E, \mathcal{E})$.

Being a transition kernel means that

  • For any fixed $B \in \mathcal{E}$, the mapping
$\omega \mapsto \zeta(\omega, B)$
is measurable from $(\Omega, \mathcal{A})$ to $([0, \infty], \mathcal{B}([0, \infty]))$
  • For every fixed $\omega \in \Omega$, the mapping
$B \mapsto \zeta(\omega, B) \quad (B \in \mathcal{E})$
is a measure on $(E, \mathcal{E})$

Being locally finite means that the measures

$B \mapsto \zeta(\omega, B)$

satisfy $\zeta(\omega, \tilde{B}) < \infty$ for all bounded measurable sets $\tilde{B} \in \mathcal{E}$ and for all $\omega \in \Omega$ outside some $P$-null set.
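
As an illustrative example (not from the source text), consider the random Dirac measure $\zeta(\omega, B) = \delta_{X(\omega)}(B) = \mathbf{1}_{B}(X(\omega))$ for a random variable $X$: for fixed $B$ the map $\omega \mapsto \zeta(\omega, B)$ is a random variable, and for fixed $\omega$ the map $B \mapsto \zeta(\omega, B)$ is a (probability, hence locally finite) measure. A minimal numerical sketch, assuming $X$ is standard normal:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sketch (not from the source): the random Dirac measure
    # zeta(omega, B) = 1_B(X(omega)), where X is a standard normal random variable.
    def zeta(x_omega, B):
        """Evaluate delta_{X(omega)} on the set B, given as an indicator predicate."""
        return 1.0 if B(x_omega) else 0.0

    # For fixed B = [0, 1], omega -> zeta(omega, B) is a random variable:
    B = lambda x: 0.0 <= x <= 1.0
    samples = rng.standard_normal(100_000)          # realisations X(omega)
    print(np.mean([zeta(x, B) for x in samples]))   # ~ P(0 <= X <= 1) ~ 0.3413

    # For a fixed omega, B -> zeta(omega, B) is a measure (here a Dirac measure):
    x = samples[0]
    print(zeta(x, lambda t: True))    # total mass zeta(omega, E) = 1
    print(zeta(x, lambda t: False))   # zeta(omega, empty set) = 0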

In the context of stochastic processes there is the related concept of a stochastic kernel, also called a probability kernel or Markov kernel.

As a random element

Define

$\tilde{\mathcal{M}} := \{\mu \mid \mu \text{ is a measure on } (E, \mathcal{E})\}$

and the subset of locally finite measures by

$\mathcal{M} := \{\mu \in \tilde{\mathcal{M}} \mid \mu(\tilde{B}) < \infty \text{ for all bounded measurable } \tilde{B} \in \mathcal{E}\}$

For all bounded measurable $\tilde{B}$, define the mappings

$I_{\tilde{B}} \colon \mu \mapsto \mu(\tilde{B})$

from $\tilde{\mathcal{M}}$ to $\mathbb{R}$. Let $\tilde{\mathbb{M}}$ be the $\sigma$-algebra induced by the mappings $I_{\tilde{B}}$ on $\tilde{\mathcal{M}}$, and $\mathbb{M}$ the $\sigma$-algebra induced by the mappings $I_{\tilde{B}}$ on $\mathcal{M}$. Note that $\tilde{\mathbb{M}}|_{\mathcal{M}} = \mathbb{M}$.

A random measure is a random element from $(\Omega, \mathcal{A}, P)$ to $(\tilde{\mathcal{M}}, \tilde{\mathbb{M}})$ that almost surely takes values in $(\mathcal{M}, \mathbb{M})$.

Basic related concepts

Intensity measure

Main article: intensity measure

For a random measure ζ {\displaystyle \zeta } , the measure E ζ {\displaystyle \operatorname {E} \zeta } satisfying

$\operatorname{E}\left[\int f(x)\,\zeta(\mathrm{d}x)\right] = \int f(x)\,\operatorname{E}\zeta(\mathrm{d}x)$

for every positive measurable function $f$ is called the intensity measure of $\zeta$. The intensity measure exists for every random measure and is an s-finite measure.
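
A hedged numerical sketch (the Poisson process below is an assumed example, not part of the source): for a homogeneous Poisson point process on $[0,1]$ with rate $\lambda$, the intensity measure is $\operatorname{E}\zeta(B) = \lambda\,|B|$, which can be checked by Monte Carlo:

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed example: homogeneous Poisson point process zeta on [0, 1] with rate lam.
    # Its intensity measure is E[zeta](B) = lam * |B|; here E[zeta(B)] is estimated
    # by Monte Carlo for B = [0.2, 0.6] and compared with lam * |B|.
    lam = 5.0
    a, b = 0.2, 0.6

    def sample_zeta_B(n_samples):
        """Monte Carlo samples of zeta(B): points of one realisation falling in [a, b]."""
        counts = np.empty(n_samples)
        for i in range(n_samples):
            n_points = rng.poisson(lam)                  # total number of points on [0, 1]
            points = rng.uniform(0.0, 1.0, n_points)     # their locations
            counts[i] = np.sum((points >= a) & (points <= b))
        return counts

    print(sample_zeta_B(50_000).mean(), "~", lam * (b - a))   # both close to 2.0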

Supporting measure

For a random measure ζ {\displaystyle \zeta } , the measure ν {\displaystyle \nu } satisfying

$\int f(x)\,\zeta(\mathrm{d}x) = 0 \text{ a.s.} \iff \int f(x)\,\nu(\mathrm{d}x) = 0$

for all positive measurable functions $f$ is called the supporting measure of $\zeta$. The supporting measure exists for all random measures and can be chosen to be finite.
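
As a hedged illustration (an assumed example, not from the source): for a Poisson point process $\zeta$ with finite intensity measure $\mu$, the intensity itself can serve as a supporting measure, since for every positive measurable $f$

$\int f(x)\,\zeta(\mathrm{d}x) = 0 \text{ a.s.} \iff \zeta(\{f > 0\}) = 0 \text{ a.s.} \iff \mu(\{f > 0\}) = 0 \iff \int f(x)\,\mu(\mathrm{d}x) = 0,$

where the middle equivalence uses that $\zeta(B)$ is Poisson distributed with mean $\mu(B)$.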

Laplace transform

For a random measure ζ {\displaystyle \zeta } , the Laplace transform is defined as

$\mathcal{L}_{\zeta}(f) = \operatorname{E}\left[\exp\left(-\int f(x)\,\zeta(\mathrm{d}x)\right)\right]$

for every positive measurable function $f$.
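
A minimal Monte Carlo sketch (assuming, as an example not taken from the source, a homogeneous Poisson process on $[0,1]$ with rate $\lambda$, whose Laplace functional has the known closed form $\exp\left(-\int_{0}^{1}(1 - e^{-f(x)})\,\lambda\,\mathrm{d}x\right)$):

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed example: homogeneous Poisson process on [0, 1] with rate lam and test
    # function f(x) = x**2. The Monte Carlo estimate of E[exp(-int f dzeta)] is
    # compared with the closed form exp(-int_0^1 (1 - e^{-f(x)}) * lam dx).
    lam = 3.0
    f = lambda x: x ** 2

    def laplace_mc(n_samples=100_000):
        vals = np.empty(n_samples)
        for i in range(n_samples):
            n_points = rng.poisson(lam)
            points = rng.uniform(0.0, 1.0, n_points)
            vals[i] = np.exp(-np.sum(f(points)))   # exp(-int f dzeta) for this realisation
        return vals.mean()

    xs = np.linspace(0.0, 1.0, 100_001)
    closed_form = np.exp(-np.mean((1.0 - np.exp(-f(xs))) * lam))   # Riemann-sum approximation

    print(laplace_mc(), "~", closed_form)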

Basic properties

Measurability of integrals

For a random measure ζ {\displaystyle \zeta } , the integrals

$\int f(x)\,\zeta(\mathrm{d}x)$

and $\zeta(A) := \int \mathbf{1}_{A}(x)\,\zeta(\mathrm{d}x)$

for positive $\mathcal{E}$-measurable $f$ are measurable, so they are random variables.

Uniqueness

The distribution of a random measure is uniquely determined by the distributions of

$\int f(x)\,\zeta(\mathrm{d}x)$

for all continuous functions $f$ with compact support on $E$. For a fixed semiring $\mathcal{I} \subset \mathcal{E}$ that generates $\mathcal{E}$ in the sense that $\sigma(\mathcal{I}) = \mathcal{E}$, the distribution of a random measure is also uniquely determined by the integrals $\int f(x)\,\zeta(\mathrm{d}x)$ for all positive simple $\mathcal{I}$-measurable functions $f$.

Decomposition

In general, a measure may be decomposed as:

$\mu = \mu_{d} + \mu_{a} = \mu_{d} + \sum_{n=1}^{N} \kappa_{n} \delta_{X_{n}},$

Here $\mu_{d}$ is a diffuse measure without atoms, while $\mu_{a}$ is a purely atomic measure.
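
A small illustrative sketch (a hypothetical example, not from the source) evaluating a measure of this form on intervals, with $\mu_{d}$ taken to be Lebesgue measure restricted to $[0,1]$ and two atoms $\kappa_{n}\delta_{X_{n}}$:

    # Hypothetical example of mu = mu_d + sum_n kappa_n * delta_{X_n}: the diffuse part
    # mu_d is Lebesgue measure restricted to [0, 1], and the atomic part has two atoms.
    atoms = [(0.25, 2.0), (0.7, 0.5)]          # pairs (X_n, kappa_n)

    def mu(a, b):
        """Evaluate mu on the interval [a, b]."""
        diffuse = max(0.0, min(b, 1.0) - max(a, 0.0))        # Lebesgue part on [0, 1]
        atomic = sum(k for x, k in atoms if a <= x <= b)     # atoms falling in [a, b]
        return diffuse + atomic

    print(mu(0.0, 1.0))   # 1.0 + 2.0 + 0.5 = 3.5
    print(mu(0.5, 0.8))   # 0.3 + 0.5 = 0.8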

Random counting measure

A random measure of the form:

$\mu = \sum_{n=1}^{N} \delta_{X_{n}},$

where $\delta$ is the Dirac measure and the $X_{n}$ are random variables, is called a point process or random counting measure. This random measure describes a set of $N$ particles whose locations are given by the (generally vector-valued) random variables $X_{n}$. The diffuse component $\mu_{d}$ is null for a counting measure.

In the formal notation above, a random counting measure is a map from a probability space to the measurable space $(N_{X}, \mathfrak{B}(N_{X}))$. Here $N_{X}$ is the space of all boundedly finite integer-valued measures $N \in M_{X}$ (called counting measures).

The definitions of expectation measure, Laplace functional, moment measures and stationarity for random measures follow those of point processes. Random measures are useful in the description and analysis of Monte Carlo methods, such as Monte Carlo numerical quadrature and particle filters.
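
For instance, plain Monte Carlo quadrature can be read as integrating $f$ against the normalized random counting measure $\tfrac{1}{N}\sum_{n=1}^{N}\delta_{X_{n}}$. A minimal sketch (the target integral $\int_{0}^{1}\sin(\pi x)\,\mathrm{d}x$ is an assumed example):

    import numpy as np

    rng = np.random.default_rng(3)

    # Minimal sketch: Monte Carlo quadrature as integration of f against the normalized
    # random counting measure (1/N) * sum_n delta_{X_n}, with X_n i.i.d. uniform on [0, 1].
    f = lambda x: np.sin(np.pi * x)

    N = 100_000
    X = rng.uniform(0.0, 1.0, N)        # particle locations X_1, ..., X_N
    estimate = np.mean(f(X))            # integral of f against (1/N) * sum_n delta_{X_n}

    print(estimate, "~", 2.0 / np.pi)   # exact value of int_0^1 sin(pi * x) dx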
