Mathew Penrose
- Published in print:
- 2003
- Published Online:
- September 2007
- ISBN:
- 9780198506263
- eISBN:
- 9780191707858
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198506263.003.0001
- Subject:
- Mathematics, Probability / Statistics
This introductory chapter contains a general discussion of both the historical and the applied background behind the study of random geometric graphs. A brief overview is presented, along with some standard definitions in graph theory and probability theory. Specific terminology is introduced for two limiting regimes in the choice of r = r(n): the thermodynamic limit, where the mean vertex degree approaches a finite constant, and the connectivity regime, where it grows logarithmically with n. Some elementary probabilistic results are given on large deviations for the binomial and Poisson distributions, and on Poisson point processes.
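As a concrete illustration of the two regimes (a minimal sketch, not taken from the book; the function name and the choice of the unit square are assumptions), the following Python snippet samples a random geometric graph and compares mean degrees when n·r² is held constant versus proportional to log n:

```python
import numpy as np

# A sketch (not from the book): sample a random geometric graph on the unit
# square and compare mean degrees in the two limiting regimes for r = r(n).
def random_geometric_graph(n, r, rng=None):
    """Place n uniform points in [0, 1]^2; join pairs at Euclidean distance < r."""
    rng = np.random.default_rng(rng)
    pts = rng.random((n, 2))
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    adj = (dist < r) & ~np.eye(n, dtype=bool)
    return pts, adj

n = 1000
# Thermodynamic limit: n * r^2 constant, so the mean degree tends to a constant.
# Connectivity regime: n * r^2 proportional to log n.
for label, r in [("thermodynamic", (1.0 / n) ** 0.5),
                 ("connectivity", (np.log(n) / n) ** 0.5)]:
    _, adj = random_geometric_graph(n, r, rng=0)
    print(f"{label:>13}: mean degree = {adj.sum(axis=1).mean():.2f}")
```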
Stephen J. Blundell and Katherine M. Blundell
- Published in print:
- 2009
- Published Online:
- January 2010
- ISBN:
- 9780199562091
- eISBN:
- 9780191718236
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780199562091.003.0003
- Subject:
- Physics, Particle Physics / Astrophysics / Cosmology
This chapter defines some basic concepts in probability theory. It begins by stating that the probability of occurrence of a particular event, taken from a finite set of possible events, is zero if that event is impossible, is one if that event is certain, and takes a value somewhere in between zero and one if that event is possible but not certain. It considers two different types of probability distribution: discrete and continuous. Variance, linear transformation, independent variables, and binomial distribution are also discussed.
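To make the binomial example and the variance rule concrete (a quick numerical check, not from the chapter), the snippet below verifies that a binomial pmf sums to one and that Var(aX + b) = a²Var(X) under a linear transformation:

```python
import numpy as np
from math import comb

# Binomial pmf: P(k) = C(n, k) p^k (1 - p)^(n - k); the probabilities sum to one.
n, p = 10, 0.3
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
print(f"pmf sums to {sum(pmf):.6f}")

# Linear transformation of a random variable: Var(aX + b) = a^2 Var(X).
rng = np.random.default_rng(0)
x = rng.binomial(n, p, size=100_000)
a, b = 2.5, 7.0
print(np.var(a * x + b), a**2 * np.var(x))  # approximately equal
```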
Therese M. Donovan and Ruth M. Mickey
- Published in print:
- 2019
- Published Online:
- July 2019
- ISBN:
- 9780198841296
- eISBN:
- 9780191876820
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198841296.003.0008
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies
This chapter focuses on probability mass functions. One of the primary uses of Bayesian inference is to estimate parameters. To do so, it is necessary to first build a good understanding of probability distributions. This chapter introduces the idea of a random variable and presents general concepts associated with probability distributions for discrete random variables. It starts off by discussing the concept of a function and goes on to describe how a random variable is a type of function. The binomial distribution and the Bernoulli distribution are then used as examples of probability mass functions (pmfs). The pmfs can be used to specify prior distributions, likelihoods, likelihood profiles, and/or posterior distributions in Bayesian inference.
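As a sketch of how a pmf feeds into Bayesian parameter estimation (invented numbers; the grid approximation is one simple choice, not necessarily the book's), the snippet below uses the binomial pmf as a likelihood over a grid of candidate success probabilities:

```python
import numpy as np
from scipy.stats import binom

# Discrete grid of candidate values for the binomial success probability p.
grid = np.linspace(0.01, 0.99, 99)
prior = np.full(grid.size, 1.0 / grid.size)   # uniform prior over the grid

y, n = 7, 10                                  # invented data: 7 successes in 10 trials
likelihood = binom.pmf(y, n, grid)            # binomial pmf evaluated at each candidate p

posterior = prior * likelihood                # Bayes' rule, up to a constant...
posterior /= posterior.sum()                  # ...then normalized

print(f"posterior mode: p = {grid[np.argmax(posterior)]:.2f}")  # near y/n = 0.7
```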
James Davidson
- Published in print:
- 1994
- Published Online:
- November 2003
- ISBN:
- 9780198774037
- eISBN:
- 9780191596117
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/0198774036.003.0008
- Subject:
- Economics and Finance, Econometrics
Specializing the concepts of Ch. 7 to the case of real variables, this chapter introduces distribution functions, discrete and continuous distributions, and describes examples such as the binomial, uniform, Gaussian, and Cauchy distributions. It then treats multivariate distributions and the concept of independence.
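A small numerical aside (not from the chapter) on why the Cauchy distribution earns its place in this list: unlike the Gaussian, it has no mean, so sample averages fail to settle down as the sample grows:

```python
import numpy as np

# Sample means stabilize for the Gaussian (finite mean) but not for the
# Cauchy, whose mean does not exist.
rng = np.random.default_rng(1)
for name, draw in [("gaussian", rng.standard_normal), ("cauchy", rng.standard_cauchy)]:
    means = [draw(10**k).mean() for k in (3, 4, 5, 6)]
    print(name, [f"{m:.3f}" for m in means])
```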
Andy Hector
- Published in print:
- 2015
- Published Online:
- March 2015
- ISBN:
- 9780198729051
- eISBN:
- 9780191795855
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198729051.003.0009
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
This chapter looks at three of the main types of generalized linear model (GLM). GLMs using the Poisson distribution are a good starting place when dealing with integer count data. The default log link function prevents the prediction of negative counts and the Poisson distribution models the variance (approximately equal to the mean). GLMs with a binomial distribution are designed for the analysis of binomial counts (how many times something occurred relative to the total number of possible times it could have occurred). A logistic link function constrains predictions to be above zero and below the maximum using the S-shaped logistic curve. Overdispersion can be diagnosed and dealt with using a quasi-maximum likelihood extension to GLM analysis. Binomial GLMs can also be used to analyse binary data as a special case, with some minor differences introduced into the analysis by the constrained nature of the binary data.
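A minimal sketch of the first of these models, assuming the statsmodels library and simulated data (not the book's examples): a Poisson GLM with the default log link, plus a simple dispersion check:

```python
import numpy as np
import statsmodels.api as sm

# Simulated count data (invented, not the book's) with a log-linear mean.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 200)
counts = rng.poisson(np.exp(0.5 + 1.2 * x))

# Poisson GLM with the default log link: predicted counts can never go negative.
X = sm.add_constant(x)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)                         # close to the true values [0.5, 1.2]

# Simple overdispersion diagnostic: Pearson chi-square per residual degree of
# freedom should be near 1 if the Poisson variance assumption holds.
print(fit.pearson_chi2 / fit.df_resid)
```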
Andy Hector
- Published in print:
- 2021
- Published Online:
- August 2021
- ISBN:
- 9780198798170
- eISBN:
- 9780191839399
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198798170.003.0017
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
GLMs with a binomial distribution are designed for the analysis of binomial counts (how many times something occurred relative to the total number of possible times it could have occurred). A logistic link function constrains predictions to be above zero and below the maximum using the S-shaped logistic curve. Overdispersion can be diagnosed and dealt with using a quasi-maximum likelihood extension to GLM analysis.
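A minimal sketch of such a binomial-count GLM, assuming statsmodels and simulated data (not the book's): the response is passed as a (successes, failures) pair and fitted with the logistic link:

```python
import numpy as np
import statsmodels.api as sm

# Simulated binomial counts (invented data): successes out of 20 trials, with
# the success probability following the S-shaped logistic curve.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 150)
trials = np.full(150, 20)
p = 1.0 / (1.0 + np.exp(-(0.3 + 1.5 * x)))
successes = rng.binomial(trials, p)

# statsmodels takes the response as a (successes, failures) pair; the logistic
# link keeps fitted proportions above zero and below the maximum.
y = np.column_stack([successes, trials - successes])
fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
print(fit.params)                        # close to the true values [0.3, 1.5]
print(fit.pearson_chi2 / fit.df_resid)   # near 1 when there is no overdispersion
```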
Andy Hector
- Published in print:
- 2021
- Published Online:
- August 2021
- ISBN:
- 9780198798170
- eISBN:
- 9780191839399
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198798170.003.0018
- Subject:
- Biology, Biomathematics / Statistics and Data Analysis / Complexity Studies, Ecology
Binomial GLMs can also be used to analyse binary data as a special case, with some minor differences introduced into the analysis by the constrained nature of the binary data.
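A companion sketch for the binary special case (simulated data, assuming statsmodels; not the book's example):

```python
import numpy as np
import statsmodels.api as sm

# Simulated binary outcomes (invented data): each observation is a single 0/1
# response rather than a count out of n trials.
rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, 500)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 1.5 * x))))

fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
print(fit.params)
# One constraint of binary data: the residual deviance carries no information
# about overdispersion, so dispersion checks used for binomial counts do not apply.
```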
Abraham Nitzan
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780198529798
- eISBN:
- 9780191916649
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198529798.003.0013
- Subject:
- Chemistry, Physical Chemistry
As discussed in Section 1.5, the characterization of observables as random variables is ubiquitous in descriptions of physical phenomena. This is not immediately obvious in view of the fact that the physical equations of motion are deterministic; this issue was discussed in Section 1.5.1. Random functions, ordered sequences of random variables, were discussed in Section 1.5.3. The focus of this chapter is a particular class of random functions, stochastic processes, for which the ordering parameter is time. Time is a continuous ordering parameter; however, in many practical situations observations of the random function z(t) are made at discrete times 0 < t1 < t2 < ... < tn < T. In this case the sequence {z(ti)} is a discrete sample of the stochastic process z(t). Let us start with an example. Consider a stretch of highway between two intersections, and let the variable of interest be the number of cars within this road segment at any given time, N(t). This number is obviously a random function of time whose properties can be deduced from observation and also from experience and intuition. First, this function takes positive integer values, but this is of no significance: we could redefine N → N − ⟨N⟩ and the new variable would assume both positive and negative values. Second and more significantly, this function is characterized by several timescales: (1) Let τ1 be the average time it takes a car to go through this road segment, for example 1 min, and compare N(t) and N(t + Δt) for Δt ≪ τ1 and Δt ≫ τ1. Obviously N(t) ≈ N(t + Δt) for Δt ≪ τ1, while in the opposite case the random numbers N(t) and N(t + Δt) will be almost uncorrelated. Figure 7.1 shows a typical result of one observation of this kind. The apparent lack of correlations between successive points in this data set expresses the fact that numbers sampled at intervals equal to or longer than the time it takes to traverse the given distance are not correlated. (2) The apparent lack of a systematic component in the time series displayed here reflects only a relatively short-time behavior.
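A hypothetical simulation in the spirit of the highway example (the arrival rate and transit time are invented): cars arrive as a Poisson stream, each spends τ1 minutes in the segment, and the correlation between N(t) and N(t + Δt) dies out once Δt exceeds τ1:

```python
import numpy as np

# Cars arrive as a Poisson stream and each spends tau1 minutes in the segment;
# N(t) counts the cars currently inside.
rng = np.random.default_rng(0)
rate, tau1, horizon = 5.0, 1.0, 5000.0          # arrivals/min, transit time, total minutes
arrivals = np.cumsum(rng.exponential(1.0 / rate, int(rate * horizon * 1.5)))
arrivals = arrivals[arrivals < horizon]

# N(t) = (# arrived by t) - (# already exited by t); exits happen at arrival + tau1.
step = 0.05
times = np.arange(10.0, horizon - 10.0, step)
series = np.searchsorted(arrivals, times) - np.searchsorted(arrivals + tau1, times)

for dt in (0.1, 0.5, 1.0, 2.0):
    lag = round(dt / step)
    c = np.corrcoef(series[:-lag], series[lag:])[0, 1]
    print(f"dt = {dt:.1f} min: correlation = {c:.2f}")   # decays to ~0 once dt >= tau1
```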
James Davidson
- Published in print:
- 2021
- Published Online:
- November 2021
- ISBN:
- 9780192844507
- eISBN:
- 9780191927201
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780192844507.003.0008
- Subject:
- Economics and Finance, Econometrics
Specializing the concepts of Chapter 7 to the case of real variables, this chapter introduces distribution functions, discrete and continuous distributions, and describes examples such as the binomial, uniform, Gaussian, Cauchy, and gamma distributions. It then treats multivariate distributions and the concept of independence.
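A quick numerical check (invented distributions, not the chapter's) of the independence concept: for independent X and Y the joint distribution function factorizes, F(x, y) = F_X(x)·F_Y(y):

```python
import numpy as np

# For independent X and Y the joint distribution function factorizes:
# F(x, y) = F_X(x) * F_Y(y). Check empirically at one point (x, y) = (a, b).
rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.0, size=200_000)   # gamma-distributed X
y = rng.standard_normal(200_000)                    # Gaussian Y, drawn independently
a, b = 1.5, 0.0
joint = np.mean((x <= a) & (y <= b))
product = np.mean(x <= a) * np.mean(y <= b)
print(joint, product)                               # approximately equal
```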
N. Scafetta, P. Grigolini, P. Hamilton, and B. J. West
- Published in print:
- 2004
- Published Online:
- November 2020
- ISBN:
- 9780195159769
- eISBN:
- 9780197562024
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195159769.003.0021
- Subject:
- Earth Sciences and Geography, Atmospheric Sciences
A complex process is often a balance between nonscaling and scaling components. We show how the nonextensive Tsallis q-entropy indicator may be interpreted as a measure of the nonscaling condition in time series. This is done by applying the nonextensive entropy formalism to the diffusion entropy analysis (DEA). We apply the analysis to the study of the teen birth phenomenon. We find that the number of unmarried teen births is strongly influenced by social processes that induce an anomalous memory in the data. This memory is related to the strength of the nonscaling component of the signal and is more intense than that in the married teen birth time series. By using a wavelet multiresolution analysis, we attempt to provide a social interpretation of this effect…. One of the most exciting and rapidly developing areas of modern research is the quantitative study of "complexity." Complexity has special interdisciplinary impacts in the fields of physics, mathematics, information science, biology, sociology, and medicine. No definition of a complex system has been universally embraced, so here we adopt the working definition, "an arrangement of parts so intricate as to be hard to understand or deal with." Therefore, the main goal of the science of complexity is to develop mathematical methods in order to discriminate among the fundamental microscopic and macroscopic constituents of a complex system and to describe their interrelations in a concise way. Experiments usually yield results in the form of time series for physical observables. Typically, these time series contain both a slow regular variation, usually called a "signal," and a rapid erratic fluctuation, usually called "noise." Historically, the techniques applied to processing such time series have been based on equilibrium statistical mechanics and, therefore, they are not applicable to phenomena far from equilibrium. Among the fluctuating phenomena, a particularly important place is occupied by those phenomena characterized by some type of self-similar or scaling-fractal structures [4]. In this chapter we show that the nonextensive Tsallis q-entropy indicator may be interpreted as a measure of the strength of the nonscaling component of a time series.
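For reference, a minimal implementation of the Tsallis q-entropy itself (a sketch, not the chapter's diffusion entropy analysis): S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Shannon entropy as q → 1:

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); tends to the Shannon entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))   # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]
for q in (0.5, 0.99, 1.0, 2.0):
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.4f}")
```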
Abraham Nitzan
- Published in print:
- 2006
- Published Online:
- November 2020
- ISBN:
- 9780198529798
- eISBN:
- 9780191916649
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780198529798.003.0006
- Subject:
- Chemistry, Physical Chemistry
This chapter reviews some subjects in mathematics and physics that are used in different contexts throughout this book. The selection of subjects and the level of their coverage reflect the author’s perception of what potential users of this text were exposed to in their earlier studies. Therefore, only a brief overview is given of some subjects, while a somewhat more comprehensive discussion is given of others. In neither case can the coverage provided substitute for the actual learning of these subjects, which are covered in detail by many textbooks. A random variable is an observable whose repeated determination yields a series of numerical values (“realizations” of the random variable) that vary from trial to trial in a way characteristic of the observable. The outcomes of tossing a coin or throwing a die are familiar examples of discrete random variables. The position of a dust particle in air and the lifetime of a light bulb are continuous random variables. Discrete random variables are characterized by probability distributions; Pn denotes the probability that a realization of the given random variable is n. Continuous random variables are associated with probability density functions P(x): P(x1)dx denotes the probability that the realization of the variable x will be in the interval (x1, x1 + dx).
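A small numerical illustration (invented examples, not the book's) of the two cases described: empirical frequencies of a discrete random variable approaching its distribution Pn, and P(x1)dx approximating an interval probability for a continuous one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete: a fair die; empirical frequencies approach P_n = 1/6 for each face.
rolls = rng.integers(1, 7, 60_000)
print([round(float(np.mean(rolls == n)), 3) for n in range(1, 7)])

# Continuous: standard normal; P(x1 <= X < x1 + dx) ~ P(x1) dx for small dx.
x1, dx = 0.5, 0.01
samples = rng.standard_normal(1_000_000)
empirical = np.mean((samples >= x1) & (samples < x1 + dx))
density = np.exp(-x1**2 / 2) / np.sqrt(2 * np.pi)
print(empirical, density * dx)   # approximately equal
```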
W. Mark Saltzman
- Published in print:
- 2001
- Published Online:
- November 2020
- ISBN:
- 9780195085891
- eISBN:
- 9780197560501
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195085891.003.0008
- Subject:
- Chemistry, Medicinal Chemistry
Most biological processes occur in an environment that is predominantly water: a typical cell contains 70-85% water and the extracellular space of most tissues is 99%. Even the brain, with its complex arrangement of cells and myelinated processes, is ≈ 80% water. Drug molecules can be introduced into the body in a variety of ways; the effectiveness of drug therapy depends on the rate and extent to which drug molecules can move through tissue structures to reach their site of action. Since water serves as the primary milieu for life processes, it is essential to understand the factors that determine rates of molecular movement in aqueous environments. As we will see, rates of diffusive transport of molecules vary among biological tissues within an organism, even though the bulk composition of the tissues (i.e., their water content) may be similar. The section begins with the random walk, a useful model from statistical physics that provides insight into the kinetics of molecular diffusion. From this starting point, the fundamental relationship between diffusive flux and solute concentration, Fick’s law, is described and used to develop general mass-conservation equations. These conservation equations are essential for analysis of rates of solute transport in tissues. Molecules that are initially localized within an unstirred vessel will spread throughout the vessel, eventually becoming uniformly dispersed. This process, called diffusion, occurs by the random movement of individual molecules; molecular motion is generated by thermal energy.
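A minimal random-walk sketch (not the book's code) showing the property the chapter builds on: the mean-squared displacement of an unbiased walk grows linearly with the number of steps, the signature of diffusive transport:

```python
import numpy as np

# An unbiased one-dimensional random walk: mean-squared displacement grows
# linearly with the number of steps, the signature of diffusive transport.
rng = np.random.default_rng(0)
walkers, steps = 10_000, 1_000
moves = rng.choice([-1, 1], size=(walkers, steps))
positions = np.cumsum(moves, axis=1)

for n in (10, 100, 1000):
    msd = np.mean(positions[:, n - 1] ** 2.0)
    print(f"steps = {n:>4}: <x^2> = {msd:.1f}")   # ~n for unit step length
```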
Robert N. Compton and Michael A. Duncan
- Published in print:
- 2015
- Published Online:
- December 2015
- ISBN:
- 9780198742975
- eISBN:
- 9780191816932
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/acprof:oso/9780198742975.003.0019
- Subject:
- Physics, Atomic, Laser, and Optical Physics
Laser Raman spectroscopy has been a powerful tool for the characterization of molecules for many years. This chapter describes a novel technique for the recording of Raman spectra under liquid nitrogen (RUN) which reduces sample burning and increases resolution. An introduction to the theory of Raman spectroscopy is followed by a description of how to perform RUN spectroscopy with examples using CCl4, CS2, and ferrocene. Computations using the Gaussian program are also described for the molecules studied. The calculation of isotopic distributions is also treated and applied to the interpretation of the spectra.
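As an illustration of the kind of isotopic-distribution calculation mentioned (a hypothetical snippet, not the chapter's procedure): chlorine occurs as roughly 75.8% ³⁵Cl and 24.2% ³⁷Cl, so the isotopologue abundances of CCl4 follow a binomial distribution in the number of ³⁷Cl atoms:

```python
from math import comb

# Chlorine is roughly 75.8% 35Cl and 24.2% 37Cl, so the isotopologues of CCl4
# follow a binomial distribution in k, the number of 37Cl atoms.
p37 = 0.242
for k in range(5):
    prob = comb(4, k) * p37**k * (1 - p37) ** (4 - k)
    print(f"C(35Cl){4 - k}(37Cl){k}: relative abundance {prob:.3f}")
```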
S. Lakshmivarahan and Sudarshan K. Dhall
- Published in print:
- 1994
- Published Online:
- November 2020
- ISBN:
- 9780195088496
- eISBN:
- 9780197560549
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195088496.003.0010
- Subject:
- Computer Science, Mathematical Theory of Computation
This chapter describes algorithms for computing prefixes/suffixes in parallel when the input data is in the form of a linked list. Developments in this chapter complement those in Chapter 3. We begin by defining a version of the prefix problem called the list ranking problem. Let ⟨N⟩ = {1, 2, …, N} and L be a list of size N. For each i ∈ ⟨N⟩, the node i in L contains two types of information: the value v(i) of node i, and the successor s(i) of node i. Clearly, s(N) = 0. A linked list may conveniently be represented as a directed, labeled graph G(V, E), where V = ⟨N⟩ and E = {(i, j) | j = s(i), i, j ∈ V}, and v(i) denotes the value for node i.
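A sequential sketch of one standard parallel solution to the list ranking problem, pointer jumping (an assumption; the chapter's own algorithms may differ in detail): each synchronous round doubles the span of every node's pointer, so O(log N) rounds suffice on a PRAM:

```python
def list_rank(succ, values):
    """Suffix sums over a linked list by pointer jumping.

    succ[i] is the successor of node i (0 marks the end, as in the text);
    values[i] is v(i). Each synchronous round doubles every pointer's span,
    so a PRAM needs only O(log N) rounds; here the rounds run sequentially.
    """
    n = len(values) - 1                   # index 0 is unused
    rank, nxt = values[:], succ[:]
    while any(nxt[i] != 0 for i in range(1, n + 1)):
        old_rank, old_nxt = rank[:], nxt[:]   # snapshot = synchronous PRAM step
        for i in range(1, n + 1):
            if old_nxt[i] != 0:
                rank[i] = old_rank[i] + old_rank[old_nxt[i]]  # absorb successor's sum
                nxt[i] = old_nxt[old_nxt[i]]                  # pointer jump
    return rank

# List order 3 -> 1 -> 4 -> 5 -> 2; unit values give each node's distance
# from the end of the list, the classic list-ranking output.
succ = [0, 4, 0, 1, 5, 2]     # succ[2] = 0: node 2 is last
vals = [0, 1, 1, 1, 1, 1]
print(list_rank(succ, vals))  # [0, 4, 1, 5, 3, 2]
```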