Jan Modersitzki
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198528418
- eISBN: 9780191713583
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198528418.003.0007
- Subject: Mathematics, Applied Mathematics
This chapter summarizes the techniques discussed so far in this book. The techniques are all based on the minimization of a certain distance measure, and the distance measure is based on image features or directly on image intensities. Image features can be user supplied (e.g., landmarks) or may be deduced automatically from the image intensities (e.g., principal axes). Typical examples of intensity-based distance measures are the sum of squared differences, correlation or mutual information. For all proposed techniques, the transformation is parametric, i.e., it can be expanded in terms of some parameters and basis functions. The desired transformation is a minimizer of the distance measure in the space spanned by the basis functions. The minimizer can be obtained from algebraic equations or by applying appropriate optimization tools.
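To make the intensity-based distance measures named above concrete, here is a minimal NumPy sketch (not Modersitzki's code) computing the sum of squared differences, the correlation, and a histogram-based mutual information estimate for a toy reference/template pair; the random images and the 32-bin histogram are assumptions made purely for illustration.

```python
import numpy as np

def ssd(R, T):
    """Sum of squared differences between reference R and template T."""
    return np.sum((R - T) ** 2)

def correlation(R, T):
    """Pearson correlation of the intensity values (1 means a perfect linear match)."""
    return np.corrcoef(R.ravel(), T.ravel())[0, 1]

def mutual_information(R, T, bins=32):
    """Histogram-based estimate of the mutual information of the intensities."""
    joint, _, _ = np.histogram2d(R.ravel(), T.ravel(), bins=bins)
    p_rt = joint / joint.sum()
    p_r = p_rt.sum(axis=1, keepdims=True)
    p_t = p_rt.sum(axis=0, keepdims=True)
    nz = p_rt > 0
    return np.sum(p_rt[nz] * np.log(p_rt[nz] / (p_r @ p_t)[nz]))

# Toy data: a random "reference" image and a slightly shifted copy of it.
rng = np.random.default_rng(0)
R = rng.random((64, 64))
T = np.roll(R, shift=2, axis=1)
print(ssd(R, T), correlation(R, T), mutual_information(R, T))
```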
Jan Modersitzki
- Published in print: 2003
- Published Online: September 2007
- ISBN: 9780198528418
- eISBN: 9780191713583
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198528418.003.0006
- Subject: Mathematics, Applied Mathematics
This chapter investigates the question of how to find an optimal linear transformation based on a distance measure. Popular choices for distance measures such as the sum of squared differences, correlation, and mutual information are discussed. Particular attention is paid to the differentiability of the distance measures. The desired transformation is restricted to a parameterizable space, and as such can be expanded as a linear combination of some basis functions. The registration task is considered as an optimization problem, where the objective is to find the coefficients of the expansion that minimize the distance measure. The well-known Gauss-Newton method is described and used for numerical optimization. Different examples are used to identify similarities and differences among the distance measures.
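As a hedged illustration of the optimization step described here (not the book's implementation), the following sketch applies a generic Gauss-Newton loop with a finite-difference Jacobian to a one-parameter registration problem: recovering the translation that aligns two one-dimensional Gaussian "images" under the sum of squared differences. The signals and the single translation parameter are assumptions chosen to keep the example small.

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=20, eps=1e-6):
    """Generic Gauss-Newton loop with a finite-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        step = np.linalg.lstsq(J, -r, rcond=None)[0]   # solve the linearized least-squares problem
        p = p + step
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# Toy 1-D "images": the template T is the reference R shifted by 3.5 samples.
x = np.arange(200, dtype=float)
R = np.exp(-((x - 100.0) ** 2) / 200.0)
T = np.exp(-((x - 103.5) ** 2) / 200.0)

# Residual of the SSD objective: the template warped by a translation p[0], minus R.
def residual(p):
    return np.interp(x - p[0], x, T) - R

print(gauss_newton(residual, p0=[0.0]))   # converges to a translation close to -3.5
```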
Gennaro Auletta
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199608485
- eISBN: 9780191729539
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199608485.003.0003
- Subject: Physics, Soft Matter / Biological Physics
Here it is shown that quantum systems can be understood as information processors. Information and entropy are related quantities but also different, since the first is formal whilst the second is dynamical. Both quantum and classical information acquisition are three-step processes that require a processor, a regulator, and a decider.
Marc Mézard and Andrea Montanari
- Published in print: 2009
- Published Online: September 2009
- ISBN: 9780198570837
- eISBN: 9780191718755
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198570837.003.0001
- Subject: Physics, Theoretical, Computational, and Statistical Physics
This chapter introduces some of the basic concepts of information theory, as well as the definitions and notations of probability theory that are used throughout the book. It defines the fundamental notions of entropy, relative entropy, and mutual information. It also presents the main questions of information theory: data compression and data transmission. Finally, it offers a brief introduction to error correcting codes and Shannon's theory.
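The three fundamental quantities named in this abstract can be computed directly from their standard definitions; the sketch below does so for an assumed toy joint distribution of two binary variables (the numbers are illustrative only).

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits (assumes q > 0 wherever p > 0)."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def mutual_information(p_xy):
    """I(X;Y) = D(p(x,y) || p(x)p(y))."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    return relative_entropy(p_xy.ravel(), (p_x * p_y).ravel())

# Toy joint distribution of two correlated binary variables.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(entropy(p_xy.sum(axis=1)), mutual_information(p_xy))   # H(X) = 1 bit, I(X;Y) ≈ 0.28 bits
```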
Gennaro Auletta
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199608485
- eISBN: 9780191729539
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199608485.003.0007
- Subject: Physics, Soft Matter / Biological Physics
In order to explain how the brain, and also elementary organisms, are able to refer to external things and processes, we need to consider complexity. Complexity is a specific combination of order and disorder in which several subsystems are interconnected but do not share overall information. This allows for information encapsulation and modularization as well as for the necessary plasticity of organisms. A proto-metabolism can emerge when several autocatalytic processes are interconnected.
Kevin B. Korb, Erik P. Nyberg, and Lucas Hope
- Published in print: 2011
- Published Online: September 2011
- ISBN: 9780199574131
- eISBN: 9780191728921
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199574131.003.0030
- Subject: Mathematics, Logic / Computer Science / Mathematical Philosophy
The causal power of C over E is (roughly) the degree to which changes in C cause changes in E. A formal measure of causal power would be very useful, as an aid to understanding and modelling complex stochastic systems. Previous attempts to measure causal power, such as those of Good (1961), Cheng (1997), and Glymour (2001), while useful, suffer from one fundamental flaw: they only give sensible results when applied to very restricted types of causal system, all of which exhibit causal transitivity. Causal Bayesian networks, however, are not in general transitive. The chapter develops an information-theoretic alternative, causal information, which applies to any kind of causal Bayesian network. Causal information is based upon three ideas. First, the chapter assumes that the system can be represented causally as a Bayesian network. Second, the chapter uses hypothetical interventions to select the causal from the non-causal paths connecting C to E. Third, it uses a variation on the information-theoretic measure mutual information to summarize the total causal influence of C on E. The chapter's measure gives sensible results for a much wider variety of complex stochastic systems than previous attempts and promises to simplify the interpretation and application of Bayesian networks.
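The chapter's exact formal measure is not reproduced here, but its second and third ingredients, intervening on C and then summarizing the C-to-E dependence with mutual information, can be illustrated on a hypothetical three-variable chain C → M → E. All conditional probability tables and the uniform intervention distribution below are assumptions made for the sake of the sketch.

```python
import numpy as np

# Hypothetical CPTs for a binary chain C -> M -> E.
p_m_given_c = np.array([[0.9, 0.1],    # row: value of C, column: value of M
                        [0.2, 0.8]])
p_e_given_m = np.array([[0.7, 0.3],    # row: value of M, column: value of E
                        [0.1, 0.9]])

def p_e_given_do_c(c):
    """P(E | do(C=c)): propagate the intervention along the causal path C -> M -> E."""
    return p_m_given_c[c] @ p_e_given_m

def causal_mutual_information(p_do_c):
    """Mutual information (in bits) between the intervened-on C and E."""
    joint = np.array([p_do_c[c] * p_e_given_do_c(c) for c in (0, 1)])
    p_c = joint.sum(axis=1, keepdims=True)
    p_e = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (p_c * p_e)[nz]))

# Intervene on C with a uniform hypothetical intervention distribution.
print(causal_mutual_information(np.array([0.5, 0.5])))
```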
Vlatko Vedral
- Published in print: 2006
- Published Online: January 2010
- ISBN: 9780199215706
- eISBN: 9780191706783
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199215706.003.0005
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Information is often considered classical, residing in a definite state rather than in a superposition of states. It seems rather strange to consider information in superpositions. Some people would, on the basis of this argument, conclude that quantum information can never exist and we can only have access to classical information. It turns out, however, that quantum information can be quantified in the same way as classical information using Shannon's prescription. There is a unique measure (up to a constant additive or multiplicative term) of quantum information, S (the von Neumann entropy), such that S is purely a function of the probabilities of outcomes of measurements made on a quantum system (that is, a function of a density operator); S is a continuous function of probability; and S is additive. This chapter discusses the fidelity of pure quantum states, Helstrom's discrimination, quantum data compression, entropy of observation, conditional entropy and mutual information, relative entropy, and the statistical interpretation of relative entropy.
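As a small numerical illustration of the measure S discussed here, the von Neumann entropy can be evaluated from the eigenvalues of a density operator; the two density matrices below (a pure state and a maximally mixed qubit) are standard textbook examples, not taken from the chapter.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return -np.sum(evals * np.log2(evals))

# A pure state has zero entropy ...
pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
# ... while the maximally mixed qubit carries one bit of entropy.
mixed = np.eye(2) / 2
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```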
Miquel Feixas and Mateu Sbert
- Published in print: 2020
- Published Online: December 2020
- ISBN: 9780190636685
- eISBN: 9780190636722
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780190636685.003.0017
- Subject: Economics and Finance, Microeconomics
Around seventy years ago, Claude Shannon, who was working at Bell Laboratories, introduced information theory with the main purpose of dealing with the communication channel between source and receiver. The communication channel, or information channel as it later became known, establishes the shared information between the source or input and the receiver or output, both of which are represented by random variables, that is, by probability distributions over their possible states. The generality and flexibility of the information channel concept allow it to be robustly applied to numerous different areas of science and technology, even the social sciences. In this chapter, we will present examples of its application to select the best viewpoints of an object, to segment an image, and to compute the global illumination of a three-dimensional virtual scene. We hope that our examples will illustrate how the practitioners of different disciplines can use it for the purpose of organizing and understanding the interplay of information between the corresponding source and receiver.
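A hedged sketch of the information channel idea: given an input distribution p(x) and a conditional p(y|x), the shared information between source and receiver is their mutual information. The three-input, four-output channel below is entirely made up (one could read the inputs as candidate viewpoints and the outputs as visible parts of an object, in the spirit of the viewpoint-selection application mentioned above).

```python
import numpy as np

def channel_mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a channel given p(x) and the row-stochastic p(y|x)."""
    p_xy = p_x[:, None] * p_y_given_x          # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x[:, None] * p_y)[nz]))

# Hypothetical channel: 3 input states (e.g., candidate viewpoints) and
# 4 output states (e.g., visible scene parts), with made-up probabilities.
p_x = np.array([0.5, 0.3, 0.2])
p_y_given_x = np.array([[0.70, 0.10, 0.10, 0.10],
                        [0.10, 0.60, 0.20, 0.10],
                        [0.05, 0.05, 0.45, 0.45]])
print(channel_mutual_information(p_x, p_y_given_x))
```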
Masashi Sugiyama and Motoaki Kawanabe
- Published in print: 2012
- Published Online: September 2013
- ISBN: 9780262017091
- eISBN: 9780262301220
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262017091.003.0011
- Subject: Computer Science, Machine Learning
This chapter summarizes the main themes covered in the preceding discussions and discusses future prospects. This book has provided a comprehensive overview of theory, algorithms, and applications of machine learning under covariate shift. Beyond covariate shift adaptation, it has been shown recently that the ratio of probability densities can be used for solving machine learning tasks. This novel machine learning framework includes multitask learning, privacy-preserving data mining, outlier detection, change detection in time series, two-sample testing, conditional density estimation, and probabilistic classification. Furthermore, mutual information, which plays a central role in information theory, can be estimated via density ratio estimation.
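A minimal sketch of the density-ratio idea in its original covariate-shift role, not the book's estimators: training inputs are reweighted by w(x) = p_test(x)/p_train(x) before fitting. Here both densities are assumed to be known Gaussians purely for illustration; in practice the ratio itself is estimated directly, which is the framework the abstract refers to.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Covariate shift: training and test inputs come from different densities,
# but the conditional y|x (here y = sin(x) + noise) stays the same.
x_train = rng.normal(loc=0.0, scale=1.0, size=200)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(200)

# Density ratio w(x) = p_test(x) / p_train(x); both densities are assumed
# known Gaussians for this toy example only.
w = norm.pdf(x_train, loc=1.5, scale=0.5) / norm.pdf(x_train, loc=0.0, scale=1.0)

# Importance-weighted least squares for a linear model y ~ a*x + b.
X = np.column_stack([x_train, np.ones_like(x_train)])
W = np.diag(w)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
print("importance-weighted fit:", coef)
```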
Jakob Hohwy
- Published in print: 2013
- Published Online: January 2014
- ISBN: 9780199682737
- eISBN: 9780191766350
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199682737.003.0003
- Subject: Philosophy, Philosophy of Mind
The central mechanism for hierarchical perceptual inference is prediction on the basis of internal, generative models, revision of model parameters, and minimization of prediction error. This is the way in which the brain engages in perceptual inference, as described in the previous chapter. This chapter describes this idea in detail. It uses a statistical analogy of model fitting to explain the notion of prediction error and then gradually builds up more and more complex versions of the theory, ending with broad ideas from information theory and statistical physics concerning mutual information, free energy and surprisal. The overall picture is of a self-supervised system that is closely supervised by the sensory signal it receives from the world, but which is hidden behind the veil of sensory input. This is a profound reversal of the way we normally think about the top-down and bottom-up signals in the brain. The system is able to recognize the causes of its sensory input in a mechanistic manner, by implicitly inverting its generative model.
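The "statistical analogy of model fitting" can be caricatured in a few lines: a hidden cause is inferred by repeatedly predicting the sensory signal from a simple generative model and nudging the estimate to reduce the squared prediction error. The linear generative model and learning rate below are assumptions; this is an analogy, not the chapter's formal scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Sensory" samples generated by a hidden cause mu_true through a simple
# generative model: sensation = 2 * mu + noise.
mu_true = 3.0
sensations = 2.0 * mu_true + 0.5 * rng.standard_normal(100)

# Perceptual inference as gradient descent on the squared prediction error.
mu_hat, rate = 0.0, 0.05
for s in sensations:
    prediction_error = s - 2.0 * mu_hat       # bottom-up error signal
    mu_hat += rate * 2.0 * prediction_error   # top-down revision of the model estimate
print("inferred cause:", mu_hat, "true cause:", mu_true)
```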
Jakob Hohwy
- Published in print: 2013
- Published Online: January 2014
- ISBN: 9780199682737
- eISBN: 9780191766350
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199682737.003.0009
- Subject: Philosophy, Philosophy of Mind
This chapter deals with a series of philosophical problems. Primarily, it is the problem of misrepresentation, namely how an account of perceptual content can make room for the possibility of misperceiving the world. This is addressed in terms of average prediction error minimization, and by appeal to notions of mutual information. The chapter delves into deeper issues about the possibility of rule-following and briefly sets out the rather radical way in which the prediction error account, in terms of free energy and the link to statistical physics, can begin to approach this problem. The chapter then situates the prediction error account within the classic debates about representation, as having both causal elements and descriptive elements. After briefly suggesting how such an account might speak to the content of conscious experience, the chapter finally describes which kind of response it would entail for the famous Chinese room problem.
Vlatko Vedral
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198815433
- eISBN: 9780191917240
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815433.003.0013
- Subject: Computer Science, Mathematical Theory of Computation
Everybody knows a Joe. Joe is the kind of guy who was the most popular boy in class, head boy at school, the life and soul of the party, and whenever he needs something, it just seems to happen for him. This is the guy we love to hate! Why is he getting all the breaks when we have to work so damn hard? As we continue to grind out each day at work, we see Joe is the guy with a big house, fast car, and the most beautiful women swooning over him. Most men would give their right arm to have a bit of that magic. So, how does he do it? Of course, I cannot tell you for sure (if I could my next book would be a bestselling self-help book), but it should come as no surprise that people with more friends and contacts tend to be more successful than people with fewer. Intuitively, we know that these people, by virtue of their wide range of contacts, seem to have more support and opportunity to make the choices they want. Likewise, again it’s no surprise that more interconnected societies tend to be able to cope better with challenging events than ones where people are segregated or isolated. Initially it seems unlikely that this connectedness has anything to do with Shannon’s information theory; after all what does sending a message down a telephone line have to do with how societies function or respond to events? The first substantial clue that information may play some role in sociology came in 1971 from the American economist and Nobel Laureate, Thomas Schelling. Up until his time sociology was a highly qualitative subject (and still predominantly is); however he showed how certain social paradigms could be approached in the same rigorous quantitative manner as other processes where exchange of information is the key driver. Schelling is an interesting character. He served with the Marshall Plan (the plan to help Europe recover after World War II), the White House, and the Executive Office of the President from 1948 to 1953, as well as holding a string of positions at illustrious academic institutions, including Yale and Harvard.
Amos Golan
- Published in print: 2017
- Published Online: November 2017
- ISBN: 9780199349524
- eISBN: 9780199349555
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780199349524.003.0003
- Subject: Economics and Finance, Econometrics
In this chapter I present the key ideas and develop the essential quantitative metrics needed for modeling and inference with limited information. I provide the necessary tools to study the traditional maximum-entropy principle, which is the cornerstone for info-metrics. The chapter starts by defining the primary notions of information and entropy as they are related to probabilities and uncertainty. The unique properties of the entropy are explained. The derivations and discussion are extended to multivariable entropies and informational quantities. For completeness, I also discuss the complete list of the Shannon-Khinchin axioms behind the entropy measure. An additional derivation of information and entropy, due to the independently developed work of Wiener, is provided as well.
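The maximum-entropy principle mentioned here can be demonstrated on the classic die problem: choose the distribution over the faces 1-6 that maximizes entropy subject to a mean constraint. The observed mean of 4.5 and the use of a general-purpose SLSQP solver are assumptions of this sketch, not material from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-entropy inference on a die (states 1..6) given only an observed mean of 4.5.
states = np.arange(1, 7)

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)        # guard against log(0)
    return np.sum(p * np.log(p))      # minimizing this maximizes the Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # normalization
    {"type": "eq", "fun": lambda p: p @ states - 4.5},     # mean constraint
]
p0 = np.full(6, 1 / 6)
res = minimize(neg_entropy, p0, bounds=[(0, 1)] * 6,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 4))   # probability tilts toward the larger faces
```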
Mark Newman
- Published in print: 2018
- Published Online: October 2018
- ISBN: 9780198805090
- eISBN: 9780191843235
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198805090.003.0014
- Subject: Physics, Theoretical, Computational, and Statistical Physics
A discussion of community structure in networks and methods for its detection. The chapter begins with an introduction to the idea of community structure, followed by descriptions of a range of methods for finding communities, including modularity maximization, the InfoMap method, methods based on maximum-likelihood fits of models to network data, betweenness-based methods, and hierarchical clustering. Also discussed are methods for assessing algorithm performance, along with a summary of performance studies and their findings. The chapter concludes with a discussion of other types of large-scale structure in networks, such as overlapping and hierarchical communities, core-periphery structure, latent-space structure, and rank structure.
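Modularity maximization, the first method listed, scores a partition by Q = (1/2m) Σ_ij [A_ij - k_i k_j / 2m] δ(c_i, c_j). The sketch below evaluates Q for an assumed toy graph (two triangles joined by one bridge edge) and its natural two-community split; it illustrates the quantity itself, not any particular detection algorithm from the chapter.

```python
import numpy as np

def modularity(A, communities):
    """Newman's modularity Q for an undirected graph with adjacency matrix A."""
    k = A.sum(axis=1)                 # degrees
    two_m = A.sum()                   # 2m: each edge counted twice
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if communities[i] == communities[j]:
                Q += A[i, j] - k[i] * k[j] / two_m
    return Q / two_m

# Two triangles joined by a single bridge edge; nodes 0-2 versus 3-5 is the
# natural two-community split.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # ≈ 0.36, higher than for worse splits
```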
Vlatko Vedral
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198815433
- eISBN: 9780191917240
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815433.003.0019
- Subject: Computer Science, Mathematical Theory of Computation
In Chapter 9 we discussed the idea of a universal Turing machine. This machine is capable of simulating any other machine given sufficient time and energy. For example, we discussed how your fridge microprocessor could be programmed to run Microsoft Windows, then we described Moore’s logic, that computers are becoming faster and smaller. Therefore, one day, a single atom may be able to simulate fully what a present day PC can do. This leads us to the fascinating possibility that every little constituent of our Universe may be able to simulate any other, given enough time and energy. The Universe therefore consists of a great number of little universal quantum computers. But this surely makes the Universe itself the largest quantum computer. So how powerful is our largest quantum computer? How many bits, how many computational steps? What is the total amount of information that the computer can hold? Since our view is that everything in reality is composed of information, it would be useful to know how much information there is in total and whether this total amount is growing or shrinking. The Second Law already tells us that the physical entropy in the Universe is always increasing. Since physical entropy has the same form as Shannon’s information, the Second Law also tells us that the information content of the Universe can only ever increase too. But what does this mean for us? If we consider our objective to be a full understanding of the Universe then we have to accept that the finish line is always moving further and further away from us. We define our reality through the laws and principles that we establish from the information that we gather. Quantum mechanics, for example, gives us a very different reality to what classical mechanics told us. In the Stone Age, the caveman’s perception of reality and what was possible was also markedly different from what Newton would have understood. In this way we process information from the Universe to create our reality. We can think of the Universe as a large balloon, within which there is a smaller balloon, our reality.
Vlatko Vedral
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198815433
- eISBN: 9780191917240
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815433.003.0015
- Subject: Computer Science, Mathematical Theory of Computation
Spring 2005, whilst sitting at my desk in the physics department at Leeds University, marking yet more exam papers, I was interrupted by a phone call. Interruptions were not such a surprise at the time; a few weeks previously I had published an article on quantum theory in the popular science magazine, New Scientist, and had since been inundated with all sorts of calls from the public. Most callers were very enthusiastic, clearly demonstrating a healthy appetite for more information on this fascinating topic, albeit occasionally one or two either hadn’t read the article, or perhaps had read into it a little too much. Comments ranging from ‘Can quantum mechanics help prevent my hair loss?’ to someone telling me that they had met their twin brother in a parallel Universe, were par for the course, and I was getting a couple of such questions each day. At Oxford we used to have a board for the most creative questions, especially the ones that clearly demonstrated the person had grasped some of the principles very well, but had then taken them to an extreme, and often, unbeknown to them, had violated several other physical laws on the way. Such questions served to remind us of the responsibility we had in communicating science – to make it clear and approachable but yet to be pragmatic. As a colleague of mine often said – sometimes working with a little physics can be more dangerous than working with none at all. ‘Hello Professor Vedral, my name is Jon Spooner, I’m a theatre director and I am putting together a play on quantum theory’, said the voice as I picked up the phone. ‘I am weaving elements of quantum theory into the play and we want you as a consultant to verify whether we are interpreting it accurately’. Totally stunned for at least a good couple of seconds, I asked myself, ‘This guy is doing what?’ Had I misheard? A play on quantum theory? Anyway it occurred to me that there might be an appetite for something like this, given how successful the production of Copenhagen, a play by Michael Frayn, had been a few years back.
Vlatko Vedral
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198815433
- eISBN: 9780191917240
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815433.003.0009
- Subject: Computer Science, Mathematical Theory of Computation
The concept of information is so ubiquitous nowadays that it is simply unavoidable. It has revolutionized the way we perceive the world, and for someone not to know that we live in the information age would make you wonder where they’ve been for the last 30 years. In this information age we are no longer grappling with steam engines or locomotives; we are now grappling with understanding and improving our information processing abilities – to develop faster computers, more efficient ways to communicate across ever vaster distances, more balanced financial markets, and more efficient societies. A common misconception is that the information age is just technological. Well let me tell you once and for all that it is not! The information age at its heart is about affecting and better understanding just about any process Nature throws at us: physical, biological, sociological, whatever you name it – nothing escapes. Even though many would accept that we live in the age of information, surprisingly the concept of information itself is still often not well understood. In order to see why this is so, it’s perhaps worth reflecting a little on the age that preceded it, the industrial age. Central concepts within the industrial age, which can be said to have begun in the early eighteenth century in the north of England, were work and heat. People have, to date, found these concepts and their applicability much more intuitive and easier to grasp than the equivalent role information plays in the information age. In the industrial age, the useful application of work and heat was largely evident through the resulting machinery, the type of engineering, buildings, ships, trains, etc. It was easy to point your finger and say ‘look, this is a sign of the industrial age’. In Leeds, for example, as I used to take my usual walk down Foundry Street in the area called Holbeck, traces of the industrial revolution were still quite evident. John Marshall’s Temple Mills and Matthew Murray’s Round Foundry are particularly striking examples; grand imposing buildings demanding respect and appreciation for the hundreds of people who worked in squalid conditions and around the clock to ensure that the country remained well fed, clothed, or transported.
Ginestra Bianconi
- Published in print: 2018
- Published Online: July 2018
- ISBN: 9780198753919
- eISBN: 9780191815676
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198753919.003.0008
- Subject: Physics, Theoretical, Computational, and Statistical Physics
Multilayer networks have a mesoscale structure organized in multilayer communities, spanning different layers and often revealing important functional properties of the network. In this chapter the major techniques proposed for detecting and characterizing the multilayer communities are described, including generalized modularity, consensus clustering, multilayer Infomap, multilink communities, tensorial decomposition, Normalized Mutual Information, and theta indicators. The main benefits and limitations of these approaches are discussed and revealed by analysing the results obtained on real datasets coming from sociology, technology, molecular biology and brain networks. Additionally, techniques for layer aggregation and disaggregation are discussed here. These methods are compared and commented on in order to provide a general perspective on the subject.
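Normalized Mutual Information, one of the indicators listed above, compares two community assignments of the same nodes. The sketch below computes it from the contingency table, normalizing by the arithmetic mean of the two entropies (one common convention among several); the two toy label vectors are assumptions.

```python
import numpy as np

def normalized_mutual_information(labels_a, labels_b):
    """NMI between two community assignments, normalized by the mean entropy."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1                 # contingency table of co-assignments
    joint /= n
    p_a = joint.sum(axis=1, keepdims=True)
    p_b = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (p_a * p_b)[nz]))
    h_a = -np.sum(p_a[p_a > 0] * np.log(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log(p_b[p_b > 0]))
    return mi / (0.5 * (h_a + h_b))

# Toy example: community labels of the same eight nodes in two layers.
layer_1 = [0, 0, 0, 1, 1, 1, 2, 2]
layer_2 = [0, 0, 1, 1, 1, 1, 2, 2]
print(normalized_mutual_information(layer_1, layer_2))
```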
Vlatko Vedral
- Published in print: 2018
- Published Online: November 2020
- ISBN: 9780198815433
- eISBN: 9780191917240
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198815433.003.0016
- Subject: Computer Science, Mathematical Theory of Computation
Who hasn’t heard of a computer? In a society entirely dominated by these transistor infested boxes there are probably only a few remaining isolated tribes in the Amazon or around the Kalahari that have not been affected. From organizing our finances, flying a plane, warming up food, controlling our heartbeat (for some), these devices are prevalent in each and every aspect of our society. Whether we are talking about personal computers, mainframe computers, or the embedded computers that we find in our mobile phones or microwave ovens, it is very hard to even imagine a world without them. The term computer, however, means more than just your average Apple Mac or PC. A computer, at its most basic level, is any object that can take instructions, and perform computations based on those instructions. In this sense computation is not limited to a machine or mechanical apparatus; atomic physical phenomena or living organisms are also perfectly valid forms of computers (and in many cases far more powerful than what we can achieve through current models). We’ll discuss alternative models of computation later in this chapter. Computers come in a variety of shapes and sizes and some are not always identifiable as computers at all (would you consider your fridge a computer?). Some are capable of doing millions of calculations in a single second, while others may take long periods of time to do even the most simple calculations. But theoretically, anything one computer is capable of doing, another computer is also capable of doing. Given the right instructions, and sufficient memory, the computer found in your fridge could, for example, simulate Microsoft Windows. The fact that it might be ridiculous to waste time using the embedded computer in your fridge to do anything other than what it was designed for is irrelevant – the point is that it obeys the same model of computation as every other computer and can therefore – by hook or by crook – eventually achieve the same result. This notion is based on what is now called the Church–Turing thesis (dating back to 1936), a hypothesis about the nature of mechanical calculation devices, such as electronic computers.