Michael A. Carrier
- Published in print: 2009
- Published Online: May 2009
- ISBN: 9780195342581
- eISBN: 9780199867035
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780195342581.003.0014
- Subject: Law, Intellectual Property, IT, and Media Law
A standard is a common platform that allows products to work together. This chapter begins by describing the various types of standard-setting. It then offers a brief history of the antitrust treatment of standard-setting organizations (SSOs). It sets forth the anticompetitive concerns raised by standard-setting, including the harm posed by buyer power, known as monopsony. It compares these concerns to the significant procompetitive benefits that SSOs and their IP rules offer. It concludes by calling for Rule-of-Reason analysis for SSOs.
Robert Arp, Barry Smith, and Andrew D. Spear
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262527811
- eISBN: 9780262329583
- Item type: book
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262527811.001.0001
- Subject: Philosophy, General
The potential of information-driven disciplines such as biology and clinical science can be realized only if those involved in the production and analysis of data can successfully build upon each other’s work. The quantity and heterogeneity of the data being produced raise challenges to this goal, as does the tendency of different communities to describe their data in different, sometimes ad hoc, ways. If computers are to exploit the results of scientific research effectively and enable interoperability among diverse data repositories, then a strategy is needed to counteract such tendencies to data-silo formation. The use of common, consensus-based, controlled vocabularies to tag or describe data is one such strategy. Applied ontology is the discipline that creates, evaluates, and applies such common vocabularies, called ‘ontologies’; it involves contributions from philosophers, logicians, and computer scientists, working with researchers in specific scientific disciplines as well as with users and creators of data in extra-scientific fields. Ontologies provide not merely common terms but also definitions of these terms, expressed in a formal language to allow processing by computers. The book describes the concrete steps involved in building and using ontologies for purposes of tagging data. It documents principles of best practice and provides examples of the different sorts of errors to be avoided. It also provides an introduction to a specific top-level ontology, the Basic Formal Ontology (BFO), and to the computational resources used in building and applying ontologies, including the Web Ontology Language (OWL).
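As a minimal illustration of the tagging strategy described in this abstract, the Python sketch below uses the rdflib library to annotate a data record with a term from a shared controlled vocabulary. The namespaces and term names are hypothetical placeholders, not identifiers from BFO or any published ontology.

```python
# A minimal sketch of ontology-based data tagging, using the rdflib library.
# The EX and DATA namespaces and all term names below are hypothetical
# placeholders, not identifiers from BFO or any published ontology.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/ontology/")   # shared controlled vocabulary
DATA = Namespace("http://example.org/dataset/")  # the data being described

g = Graph()

# Vocabulary side: a term with a formal parent and a human-readable label.
g.add((EX.Molecule, RDFS.subClassOf, EX.MaterialEntity))
g.add((EX.Molecule, RDFS.label, Literal("molecule")))

# Data side: tag a record by typing it with the shared term.
g.add((DATA.sample42, RDF.type, EX.Molecule))
g.add((DATA.sample42, RDFS.label, Literal("caffeine sample")))

# Because the tag is a shared term, the same query works across any
# dataset annotated with this vocabulary.
for record in g.subjects(RDF.type, EX.Molecule):
    print(record)
```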
Robin Mansell and W. Edward Steinmueller
- Published in print: 1993
- Published Online: October 2011
- ISBN: 9780198295570
- eISBN: 9780191685149
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198295570.003.0004
- Subject: Business and Management, Information Technology, Innovation
The evolution of certain associated technologies, captured in the notion of ‘convergence’, has significant impacts on the nature of, and the conditions for, user access to communication and information infrastructure. The concept of convergence can be understood in two ways: the first refers to the technical potential for all electronic communication signals to be provided as interconnecting digital ‘bitstreams’; the second concerns the market and industrial implications of that technological potential. In contrast to the notions expressed by other writers, our notion of convergence is associated with the evolution of convergent markets, imbalances in demand growth, and the institutional problems of achieving interconnection and interoperability. In this chapter, attention is drawn to the formation of the technological trajectories that shape the near-term evolution of communication and information infrastructure.
Robert Arp, Barry Smith, and Andrew D. Spear
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262527811
- eISBN: 9780262329583
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262527811.003.0002
- Subject: Philosophy, General
Philosophical ontology (also ‘metaphysics’) explores general questions concerning the nature of being; applied ontology concerns more specific questions pertaining to entities in specific domains. Applied ontology takes over from philosophy the central role of taxonomies, and each applied ontology is built around a hierarchy of types and subtypes (an is_a hierarchy). Applied ontologies are organized along two dimensions: (1) greater and lesser generality, and (2) intended use. Concerning (1), we distinguish between top-level and domain ontologies. Top-level ontologies are domain-neutral, including terms such as ‘object’ or ‘process’ that have maximally general scope. Domain ontologies are domain-specific, comprising terms such as ‘molecule’ or ‘catheter’. Concerning (2), we distinguish between reference and application ontologies. Reference ontologies are designed to be re-used in distinct application ontologies, themselves built to address specific needs. We address the role of these different ontologies in assisting with heterogeneous data management and promoting interoperability among information systems.
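To make the distinction between top-level and domain ontologies concrete, here is a small Python sketch (again using rdflib) in which domain terms such as ‘molecule’ are hooked under a domain-neutral top-level term through an is_a (subclass) chain. The TOP and DOM namespaces are invented for illustration.

```python
# Sketch of the top-level/domain split: domain terms are hooked under a
# domain-neutral top-level term via an is_a (subclass) chain. The TOP and
# DOM namespaces are invented for illustration.
from rdflib import Graph, Namespace, RDFS

TOP = Namespace("http://example.org/top-level/")  # domain-neutral terms
DOM = Namespace("http://example.org/domain/")     # domain-specific terms

g = Graph()
g.add((TOP.Object, RDFS.subClassOf, TOP.Entity))    # top-level ontology
g.add((DOM.Molecule, RDFS.subClassOf, TOP.Object))  # domain ontology
g.add((DOM.Catheter, RDFS.subClassOf, TOP.Object))

def ancestors(graph, cls):
    """Walk the is_a hierarchy upward from a class."""
    for parent in graph.objects(cls, RDFS.subClassOf):
        yield parent
        yield from ancestors(graph, parent)

# Prints the IRIs of TOP.Object and then TOP.Entity.
for parent in ancestors(g, DOM.Molecule):
    print(parent)
```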
Robert Arp, Barry Smith, and Andrew D. Spear
- Published in print: 2015
- Published Online: May 2016
- ISBN: 9780262527811
- eISBN: 9780262329583
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262527811.003.0008
- Subject: Philosophy, General
We discuss the interplay between applied ontology and the use of web resources in scientific and other domains, and provide an account of how ontologies are implemented computationally. We provide an introduction to the Protégé Ontology Editor, the Semantic Web, the Resource Description Framework (RDF), and the Web Ontology Language (OWL). We illustrate how BFO is used to provide the common architecture for specific domain ontologies, including the Ontology for General Medical Science (OGMS), the Infectious Disease Ontology (IDO), the Information Artifact Ontology (IAO), and the Emotion Ontology (MFO-EM). BFO terms and relations provide the starting point for the creation of definition trees in such ontologies, following the Aristotelian strategy for the authoring of definitions outlined in Chapter 4. We conclude with a discussion of the role of a top-level ontology such as BFO in facilitating semantic interoperability.
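The Aristotelian strategy mentioned here defines a term by genus plus differentia (‘an A is a B that Cs’). The following sketch shows one way such a definition might be recorded with rdflib; the EX namespace and the sample definition are illustrative, and skos:definition stands in for whatever annotation property a given ontology actually uses.

```python
# Sketch of an Aristotelian definition ("an A is a B that Cs") recorded as
# a genus (subclass) axiom plus a textual definition annotation. The EX
# namespace and the sample definition are illustrative; skos:definition
# stands in for whatever annotation property a given ontology uses.
from rdflib import Graph, Namespace, Literal, RDFS
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/medical/")

g = Graph()
# Genus: the parent type from which the definition starts.
g.add((EX.InfectiousDisease, RDFS.subClassOf, EX.Disease))
# Differentia: captured in the human-readable definition text.
g.add((EX.InfectiousDisease, SKOS.definition,
       Literal("A disease whose disorder is caused by an infectious agent.")))

print(g.serialize(format="turtle"))
```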
Laura DeNardis
- Published in print: 2020
- Published Online: May 2020
- ISBN: 9780300233070
- eISBN: 9780300249330
- Item type: chapter
- Publisher: Yale University Press
- DOI: 10.12987/yale/9780300233070.003.0005
- Subject: Political Science, Public Policy
This chapter assesses how technical standardization faces unique challenges. Embedded objects require high security but are also constrained architectures that demand lower energy consumption and restricted processing power. The current state of interoperability is fragmented, heterogeneous, and complex, involving multiple competing standards and an expanding base of standards-setting organizations. Unlike traditional communication systems, which require universality, fragmentation by sector might actually have beneficial effects, such as serving as a de facto security boundary. The chapter then explains the evolution of fragmented standards in the Internet of Things space but suggests that open standards and interoperability in the underlying common infrastructure remain vital for accountability, innovation, and stability.
Michael Metcalf, John Reid, and Malcolm Cohen
- Published in print: 2018
- Published Online: October 2018
- ISBN: 9780198811893
- eISBN: 9780191850028
- Item type: book
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198811893.001.0001
- Subject: Mathematics, Logic / Computer Science / Mathematical Philosophy
Fortran marches on, remaining one of the principal programming languages used in high-performance scientific, numerical, and engineering computing. A series of significant revisions to the standard versions of the language have progressively enhanced its capabilities, and the latest standard—Fortran 2018—includes many additions and improvements. This second edition of Modern Fortran Explained expands on the first. Given the release of updated versions of Fortran compilers, the separate descriptions of Fortran 2003 and Fortran 2008 have been incorporated into the main text, which thereby becomes a unified description of the full Fortran 2008 version of the language. This is much cleaner, many deficiencies and irregularities in the earlier language versions having been resolved. It includes object orientation and parallel processing with coarrays. Four completely new chapters describe the additional features of Fortran 2018, with its enhancements to coarrays for parallel programming, interoperability with C, IEEE arithmetic, and various other improvements. Written by leading experts in the field, two of whom have actively contributed to Fortran 2018, this is a complete and authoritative description of Fortran in its latest form. It is intended for new and existing users of the language, and for all those involved in scientific and numerical computing. It is suitable as a textbook for teaching and, with its index, as a handy reference for practitioners.
José van Dijck
- Published in print: 2013
- Published Online: January 2013
- ISBN: 9780199970773
- eISBN: 9780199307425
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199970773.003.0008
- Subject: Sociology, Culture
The last chapter reassembles the histories of individual microsystems and identifies critical questions about the changing nature of the ecosystem and online sociality. Although each of the five major platforms dissected (Facebook, Twitter, Flickr, YouTube, and Wikipedia) nurses its own mechanisms, they rest on the same values or principles: popularity, hierarchical ranking, neutrality, quick growth, large traffic volumes, fast turnovers, and personalized recommendations. The cultivation of online sociality is increasingly fenced off by three major chains of platforms (Facebook, Google, and Apple); these chains share some operational principles even if they differ on some ideological premises (open versus closed systems). Questioning the role of algorithms in the steering of desires and the power of users to control their data, including their ability to opt out, this chapter articulates larger political and social concerns, such as the changing meaning of “social,” “public,” “community,” and “nonprofit” in an ecosystem dominated by corporate forces.
Robert W. Poole Jr.
- Published in print: 2018
- Published Online: May 2019
- ISBN: 9780226557571
- eISBN: 9780226557601
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226557601.003.0009
- Subject: Economics and Finance, Public and Welfare
The first major implementation of highway utilities is proposed to be toll-financed reconstruction and modernization of the Interstates. Long-distance routes would be converted first, urban ones later. This would also be the first step in converting from per-gallon taxes to per-mile charges.
Robert W. Poole Jr.
- Published in print: 2018
- Published Online: May 2019
- ISBN: 9780226557571
- eISBN: 9780226557601
- Item type: chapter
- Publisher: University of Chicago Press
- DOI: 10.7208/chicago/9780226557601.003.0010
- Subject: Economics and Finance, Public and Welfare
Early plans envisioned a network of parkways, not the huge freeways that were built. This chapter suggests modifying existing freeways via elevated lanes, tunnels, and decks and lids, to close gaps and add some capacity without destroying any more neighborhoods. Congestion would be addressed via networks of express toll lanes. Managed arterials would supplement the revamped freeways.
Fabien Terpan
- Published in print: 2007
- Published Online: March 2012
- ISBN: 9780199218622
- eISBN: 9780191696114
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199218622.003.0012
- Subject: Law, EU Law
This chapter explores EU-NATO relations with regard to security and defence matters. In both NATO and the EU there is a legal basis for the development of a European role in security and defence. When one looks at overall objectives and main principles, there is consistency between the EU and NATO. This does not guarantee that both organisations will evolve in the same direction: arrangements must be made in order to put this theory into practice. NATO-EU relations face three major challenges: subsidiarity, interoperability, and a more balanced partnership between European members and the United States.
Ronald M. Baecker
- Published in print: 2019
- Published Online: November 2020
- ISBN: 9780198827085
- eISBN: 9780191917318
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198827085.003.0009
- Subject: Computer Science, Human-Computer Interaction
As with the chapter on learning, we begin our discussion of health applications by examining influential early visions of the possible role of computers in improving health care and medicine. We then look at the great variety of roles played by current digital technologies in this field. We first consider the online availability of health information. There are two possible sources: one from respected centres of expertise, the other from consumers of medical care, that is, patients, who in working together form what may be viewed as communities of care. There is strong evidence that people are using these online medical resources to become more intelligent guardians of their own health and to support themselves when seeking help from physicians. Next, we examine the care improvements promised by personal health and electronic medical records. Progress here has been disappointingly slow; we shall discuss the mix of technical, cultural, administrative, interpersonal, and financial reasons for the sluggishness in development and deployment. Two particularly interesting cases of medical information are data dealing with adverse drug reactions and interactions, commonly known as adverse drug events (ADEs), and the use of big data and social media in epidemic surveillance and control, by which we are becoming better equipped to indicate, predict, and track outbreaks of disease. Computers have made a huge impact on medical education through the development of human body simulators. There also continue to be more and more advanced uses of technology embedded within the human body, either to augment the functioning of organs or to replace body parts that no longer work, which could possibly result in bionic people or androids in the future. We shall present some examples indicating the pace at which these technologies are developing. Recent advances in understanding the human genome have enabled a new form of medicine called precision medicine. The goal is to use genetic screening of patients to enable more specific treatments than were hitherto possible. Precision medicine also enables what some call designer babies. We shall introduce policy and ethical issues raised by this concept.
Alan W. Brown, David J. Carney, Edwin J. Morris, Dennis B. Smith, and Paul F. Zarrella
- Published in print: 1994
- Published Online: November 2020
- ISBN: 9780195094787
- eISBN: 9780197560785
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195094787.003.0004
- Subject: Computer Science, Software Engineering
Computers have a significant impact on almost every aspect of our lives. Computer systems are used as integral components in the design of many of the artifacts we use and the homes we inhabit. They also control the operation of a number of devices we frequently use, and record information on many of the significant actions we take in our daily lives. The rapid increases in performance and reliability of computer hardware, coupled with dramatic decreases in their size and cost, have resulted in an explosion of uses of computer technology in a wide variety of application domains. A consequence of this trend is that computer software is in great demand. In addition to new software being written, many millions of lines of existing software are in daily use and require constant maintenance and upgrades. As a result, computer software is very often the overriding factor in a system’s costs, reliability, performance, and usability. Software that is poorly designed, implemented, and maintained is a major problem for many companies that make use of computer systems. These facts have led to increasing attention being placed on the processes by which software is developed and maintained, and on the computer-based technology that supports these activities. Over the past decade or more, this attention has focused on understanding better how software can be produced and evolved, and on providing automated support for these processes where appropriate. One of the consequences of this attention has been the development of the field of computer-aided software engineering (CASE), which directly addresses the needs of software engineers themselves in the use of computer-based technology to support their own development and maintenance activities. The promise of CASE is that automated support for some aspects of software development and maintenance will:
- increase productivity and reduce the cost of software development,
- improve the quality (e.g., reliability, usability, performance) of software products,
- keep documentation in step with software products as they evolve,
- facilitate maintenance of existing software systems, and
- make the software engineers’ task less odious and more enjoyable.
Daniel G. Brown and Gregory Elmes
- Published in print: 2004
- Published Online: November 2020
- ISBN: 9780198233923
- eISBN: 9780191917707
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198233923.003.0036
- Subject: Earth Sciences and Geography, Regional Geography
The role of GIS within the discipline of geography, not to mention its role within the daily operation of a very large range of human enterprises in the developed world, has undergone major changes in the decade-plus since the first edition of Geography in America (Gaile and Willmott 1989) was published. Not the least of these major changes is an important redefinition of the acronym. In 1989, GIS meant only “geographic information systems” and referred to an immature but rapidly developing technology. Today, many geographers make an emphatic distinction between the technology (GISystems or GIS) and the science behind the technology (GIScience or GISci). This important transition from a focus on the technology to a focus on the far-ranging theoretical underpinnings of the technology and its use is clearly reflected in the research progress made in this field in the past decade. This chapter highlights some of the significant aspects of this diverse research and its related impacts on education and institutions. The chapter focuses on the work of North American geographers, though reference to the work of others is unavoidable. We recognize the many and increasing contributions of our colleagues in other disciplines and overseas to the development of GISci, but focus our attention on the scope defined by the present volume. The chapter closes with speculations on the future of GIS in geography in America in the coming decade. In the late 1980s, geographic information systems (GIS) were large stand-alone software and information systems being applied to a growing range of application areas. Today GIS are well integrated into the normal operations of a large range of industries as diverse as forestry, health care delivery, retail marketing, and city planning. Developments in the capabilities of and access to GIS technology during the past decade have paralleled developments in the computer industry as a whole. Similarly, academic research into the fundamental concepts and theories that underlie GIS has matured and become better connected across multiple disciplines. Drawing on fields as diverse as computer science, cognitive science, statistics, decision science, surveying, remote sensing, and social theory, “geographic information science” (Goodchild 1992b) has emerged as an important synthesizing influence during the 1990s.
Ian Walden and Laíse Da Correggio Luciano
- Published in print: 2013
- Published Online: January 2014
- ISBN: 9780199671670
- eISBN: 9780191767463
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199671670.003.0012
- Subject: Law, Intellectual Property, IT, and Media Law
Competition law issues are emerging in relation to cloud computing, notwithstanding the relative immaturity of the sector. This chapter analyses the potential application to the cloud sector of EU competition rules governing anti-competitive agreements and abuses of a dominant position. Specific issues addressed include the development of standards, the potential importance of interoperability between cloud computing services, and the impact of restrictions on data portability. The suitability of existing competition rules as a regulatory mechanism for cloud computing is also considered.
Matthias C. Kettemann
- Published in print: 2020
- Published Online: February 2021
- ISBN: 9780198865995
- eISBN: 9780191898907
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198865995.003.0005
- Subject: Law, Intellectual Property, IT, and Media Law
Chapter 5 shows the potential of theoretical approaches to solving the normative crisis on the internet. In turn, key theories of order in the broader sense are presented and discussed. Though the majority of these theories were not posited with a view to the internet, the present study draws on their epistemic potential for the regulation of the internet. The theories (and key representatives of each) include systems theory (Luhmann/Teubner), constitutionalization/global constitutionalism (Pernice), transnationalism (Viellechner, Calliess), legal pluralism (Seinecke), multinormativity (Forst), network theory (Vesting), interoperability theory (Palfrey, Gasser, Weber), massive online micro justice (De Werra), conflict studies (Mueller), and infrastructuralization (DeNardis). Further, the study assesses the historically sedimented discourses on internet governance and their influence on ordering the internet, as well as more recent attempts to “define online norms.”
Nayan B. Ruparelia
- Published in print: 2016
- Published Online: September 2016
- ISBN: 9780262529099
- eISBN: 9780262334129
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262529099.003.0010
- Subject: Computer Science, Programming Languages
Once you have decided that you want to use cloud computing, you need to assess how you will transition your current use of IT, and the concomitant processes and services, to the cloud. Additionally, you will need to address issues related to interoperability between your current, or legacy, IT services and cloud services. This chapter helps you to assess suitable cloud services and their vendors. The contractual agreement formed between a user and a provider of cloud services is considered in terms of service level agreements (SLAs). The chapter examines SLAs and the metrics that are relevant to users, and provides a checklist as a template for selecting an appropriate cloud service. A cloud maturity model is derived from a user’s perspective; it is then used to assess your level of cloud adoption against best practices and to allow you to track your cloud adoption progress over time.
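As an illustration of the SLA metrics the chapter discusses, the Python sketch below checks a measured monthly availability figure against a target. The 99.9% threshold and the downtime value are assumptions made for the example, not terms from any actual agreement.

```python
# Sketch: checking a measured availability metric against an SLA target.
# The 99.9% target and the downtime figure are assumptions made for this
# example, not values from any provider's actual agreement.

def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Fraction of the period during which the service was up."""
    return (total_minutes - downtime_minutes) / total_minutes

SLA_TARGET = 0.999            # "three nines" monthly availability
MONTH_MINUTES = 30 * 24 * 60  # minutes in a 30-day month

measured = availability(MONTH_MINUTES, downtime_minutes=50.0)
print(f"measured availability: {measured:.5f}")  # 0.99884
print("SLA met" if measured >= SLA_TARGET else "SLA breached")
```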
Jonathan Band and Masanobu Katoh
- Published in print: 2011
- Published Online: August 2013
- ISBN: 9780262015004
- eISBN: 9780262295543
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262015004.003.0001
- Subject: Information Science, Information Science
This chapter examines the interoperability debate in the European Union and the U.S. before 1995. It considers the issues concerning the unprotectability of interface specifications and the permissibility of reverse engineering for computer software. It reviews several relevant cases, including Whelan v. Jaslow, Computer Associates v. Altai, Atari v. Nintendo, and Sega v. Accolade. It argues that the triumph of interoperability will benefit both the information technology industry and computer users around the world.
Jonathan Band and Masanobu Katoh
- Published in print: 2011
- Published Online: August 2013
- ISBN: 9780262015004
- eISBN: 9780262295543
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262015004.003.0002
- Subject: Information Science, Information Science
This chapter examines copyright cases in the U.S. relevant to the interoperability debate. These cases include Lotus v. Borland, Bateman v. Mnemonics, and Softel v. Dragon Medical. The chapter analyzes the case law concerning the protectability of program elements necessary for interoperability, and software developers’ concern about potential copyright liability for infringements committed in the process of uncovering these elements.
Jonathan Band and Masanobu Katoh
- Published in print: 2011
- Published Online: August 2013
- ISBN: 9780262015004
- eISBN: 9780262295543
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262015004.003.0003
- Subject: Information Science, Information Science
This chapter examines the legislative history of the interoperability exception in the Digital Millennium Copyright Act (DMCA) and the interoperability cases decided under the DMCA in the U.S. It explains the provisions of Section 1201 of the DMCA and the reverse engineering exception in Section 1201(f) that relates to interoperability. It suggests that the court’s decision in Chamberlain v. Skylink was a major development in DMCA jurisprudence because it prevented the DMCA from being employed to block legitimate competition in aftermarkets, by requiring a nexus between the circumvention of access controls and infringement.