Claus Huitfeldt
- Published in print: 2014
- Published Online: April 2017
- ISBN: 9780252038402
- eISBN: 9780252096280
- Item type: chapter
- Publisher: University of Illinois Press
- DOI: 10.5406/illinois/9780252038402.003.0006
- Subject: Literature, Criticism/Theory
This chapter describes how digital critical editing presupposes a mastery of markup systems, providing an overview in the form of an inventory of standards and of markup, presentation, and archiving techniques. It discusses the state of the art while focusing on key architectures and techniques considered to be the basis of digital critical editions. The chapter introduces some aspects of markup technology that are particularly relevant to textual scholarship, such as the Extensible Markup Language (XML) and the Text Encoding Initiative (TEI), and considers some of their limitations, possibilities, and future potential. Since there is no need to be conversant with every aspect and detail of markup technology, most of what is covered here is of a general nature, albeit focusing on issues assumed to be of particular relevance for textual scholarship.
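As a concrete illustration of the kind of markup the chapter has in view, the sketch below encodes a single verse line with a TEI-style critical apparatus (an `app` element holding a lemma and a variant reading) and reads it back with Python's standard XML library. The sample text, witness sigla, and processing code are illustrative assumptions, not material from the chapter itself; only the TEI namespace and element names follow published TEI practice.

```python
# Minimal sketch: a TEI-style fragment with one critical-apparatus entry,
# parsed with Python's standard library. The sample line and witnesses
# (#A, #B) are invented for illustration.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"  # the TEI namespace URI

sample = f"""<TEI xmlns="{TEI_NS}">
  <text><body>
    <l n="1">Sing, goddess, the
      <app>
        <lem wit="#A">wrath</lem>
        <rdg wit="#B">anger</rdg>
      </app>
    of Achilles.</l>
  </body></text>
</TEI>"""

root = ET.fromstring(sample)
ns = {"tei": TEI_NS}

# List each apparatus entry with its lemma and its variant readings.
for app in root.findall(".//tei:app", ns):
    lem = app.find("tei:lem", ns)
    print("lemma:", lem.text.strip(), "witness:", lem.get("wit"))
    for rdg in app.findall("tei:rdg", ns):
        print("  variant:", rdg.text.strip(), "witness:", rdg.get("wit"))
```

A real edition would validate such files against a TEI schema and process them with richer tooling (XSLT, lxml), but the basic pattern of addressing elements by namespace and name is the same.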
Andrew Finney, Michael Hucka, Benjamin J. Bornstein, Sarah M. Keating, Bruce E. Shapiro, Joanne Matthews, Ben L. Kovitz, Maria J. Schilstra, Akira Funahashi, John Doyle, and Hiroaki Kitano
- Published in print: 2006
- Published Online: August 2013
- ISBN: 9780262195485
- eISBN: 9780262257060
- Item type: chapter
- Publisher: The MIT Press
- DOI: 10.7551/mitpress/9780262195485.003.0017
- Subject: Mathematics, Mathematical Biology
This chapter describes Systems Biology Markup Language (SBML), a format for representing models in a way that can be used by different software systems to communicate and exchange those models. By supporting SBML as an input and output format, different software tools can all operate on an identical representation of a model, removing opportunities for errors in translation and assuring a common starting point for analyses and simulations. The chapter also discusses some of the resources available for working with SBML as well as ongoing efforts in SBML’s continuing evolution.
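To make the exchange-format idea concrete, the sketch below reads species and reactions out of a small SBML-like document using only Python's standard library. The toy model and its identifiers are invented, and the element names follow the SBML schema as commonly published; real tools would use a dedicated library such as libSBML, which validates documents and exposes the full object model.

```python
# Minimal sketch: extracting species and reactions from a toy SBML-like
# document with the standard library. The model content is invented; the
# namespace corresponds to SBML Level 3 Version 1 core.
import xml.etree.ElementTree as ET

SBML_NS = "http://www.sbml.org/sbml/level3/version1/core"

sample = f"""<sbml xmlns="{SBML_NS}" level="3" version="1">
  <model id="toy_model">
    <listOfSpecies>
      <species id="glucose" compartment="cell"/>
      <species id="g6p" compartment="cell"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="hexokinase" reversible="false">
        <listOfReactants><speciesReference species="glucose"/></listOfReactants>
        <listOfProducts><speciesReference species="g6p"/></listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

root = ET.fromstring(sample)
ns = {"sbml": SBML_NS}

# Every species declared in the model.
species = [s.get("id") for s in root.findall(".//sbml:species", ns)]
print("species:", species)

# Each reaction, with its reactant and product species references.
for rxn in root.findall(".//sbml:reaction", ns):
    reactants = [r.get("species")
                 for r in rxn.findall("sbml:listOfReactants/sbml:speciesReference", ns)]
    products = [p.get("species")
                for p in rxn.findall("sbml:listOfProducts/sbml:speciesReference", ns)]
    print(f"{rxn.get('id')}: {reactants} -> {products}")
```

Because every conforming tool reads and writes this same XML structure, a model exported by one simulator can be loaded by another without manual translation, which is the interoperability point the chapter makes.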
Francis X. Blouin Jr. and William G. Rosenberg
- Published in print: 2011
- Published Online: May 2011
- ISBN: 9780199740543
- eISBN: 9780199894673
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199740543.003.0004
- Subject: History, Historiography, History of Ideas
While it is obvious that the development of new information technologies has revolutionized communication, their effect on archives has been complicated and in some ways quite problematic. This chapter begins a discussion (continued in Chapter 10) by showing how emerging information technologies opened new possibilities for archives that required a radical change in archival training and management. Tracing the initial steps toward the development of online access systems, and then examining in some detail the implications for archives of born-digital records, it discusses the problems of defining the attributes of digital documents in comparison to those that are paper-based. The chapter reviews the archivists’ “appraisal debates” and explains how archivists have marginalized historiographical authorities in favor of conceptualizations drawn solely from what was now called archival theory and from a broader sense of mission.
Alison Harcourt, George Christou, and Seamus Simpson
- Published in print: 2020
- Published Online: April 2020
- ISBN: 9780198841524
- eISBN: 9780191877001
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198841524.003.0005
- Subject: Law, Intellectual Property, IT, and Media Law
This chapter explains one of the most important components of the web: the development and standardization of Hypertext Markup Language (HTML) and the Document Object Model (DOM), which are used for creating web pages and applications. In 1994, Tim Berners-Lee established the World Wide Web Consortium (W3C) to work on HTML development. The W3C later decided to introduce a new standard, XHTML 2.0. However, it was incompatible with the older HTML/XHTML versions. This led to the establishment of the Web Hypertext Application Technology Working Group (WHATWG), which worked externally to the W3C. WHATWG developed HTML5, which was adopted by the major browser developers Google, Opera, Mozilla, IBM, Microsoft, and Apple. For this reason, the W3C decided to work on HTML5, leading to a joint WHATWG/W3C working group. The chapter charts the development of HTML and WHATWG’s Living Standard, with an account of the ongoing splits and agreements between the two fora. It explains how this division of labour led the W3C to focus on the main areas of web architecture, the semantic web, the web of devices, payment applications, and web and television (TV) standards. This has led to the spillover of work to the W3C from the national sphere, notably in the development of copyright protection for TV streaming.
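For readers unfamiliar with the two artefacts at issue, the sketch below builds a tiny, well-formed (XHTML-style) page and walks it with xml.dom.minidom, Python's minimal implementation of the W3C DOM interfaces. The page content is invented; in a browser the same kind of tree is exposed to JavaScript rather than Python.

```python
# Minimal sketch: a tiny well-formed (XHTML-style) page traversed via
# xml.dom.minidom, the standard library's minimal W3C DOM implementation.
# The page content is invented for illustration.
from xml.dom import minidom

page = """<html>
  <head><title>Standards</title></head>
  <body>
    <h1>HTML and the DOM</h1>
    <p class="intro">Markup describes structure; the DOM exposes it as a tree.</p>
  </body>
</html>"""

doc = minidom.parseString(page)

# The document element is the root of the DOM tree.
print("root element:", doc.documentElement.tagName)   # html

# getElementsByTagName and attribute access mirror the W3C DOM API.
title = doc.getElementsByTagName("title")[0]
print("title:", title.firstChild.data)                # Standards

para = doc.getElementsByTagName("p")[0]
print("class:", para.getAttribute("class"))           # intro
print("text:", para.firstChild.data)                  # Markup describes structure; ...
```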
Adam Treister
- Published in print: 2005
- Published Online: November 2020
- ISBN: 9780195183146
- eISBN: 9780197561898
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195183146.003.0012
- Subject: Chemistry, Physical Chemistry
Flow cytometry is a result of the computer revolution. Biologists used fluorescent dyes in microscopy and medicine almost a hundred years before the first flow cytometer. Only after electronics became sophisticated enough to control individual cells and computers became fast enough to analyze the data coming out of the instrument, and to make a decision in time to deflect the stream, did cell sorting become viable. Since the 1970s, the capabilities of computers have grown exponentially. According to the famed Moore’s Law, the capability of a computer, as tracked by the number of transistors on a chip, doubles every 18 months. This rule has held for three decades so far, and new technologies continue to appear to keep that growth on track. The clock speed of chips is now measured in gigahertz—billions of instructions per second—and hard drives are now available with capacities measured in terabytes. Having computers so powerful, cheap, and ubiquitous changes the nature of scientific exploration. We are in the early steps of a long march of biotechnology breakthroughs spawned from this excess of compute power. From genomics to proteomics to high-throughput flow cytometry, the trend in biological research is toward mass-produced, high-volume experiments. Automation is the key to scaling their size and scope and to lowering their cost per test. Each step that was previously done by human hands is being delegated to a computer or a robot so that the implementation is more precise and scales efficiently. From making sort decisions in milliseconds to creating data archives that may last for centuries, computers control the information involved with cytometry, and software controls the computers. As the technology matures and the size and number of experiments increase, the emphasis of software development switches from instrument control to analysis and management. The challenge for computers is no longer running the cytometer. The more modern challenge for informatics is to analyze, aggregate, maintain, access, and exchange the huge volume of flow cytometry data. Clinical and other regulated use of cytometry necessitates more rigorous data administration techniques. These techniques introduce issues of security, integrity, and privacy into the processing of data.
Daniel Punday
- Published in print: 2015
- Published Online: September 2016
- ISBN: 9780816696994
- eISBN: 9781452953601
- Item type: chapter
- Publisher: University of Minnesota Press
- DOI: 10.5749/minnesota/9780816696994.003.0003
- Subject: Literature, Criticism/Theory
Chapter 3 turns from this corporate model for writing to those that embrace a more literary understanding. I begin by looking at two films that represent programming—the 1957 romantic comedy Desk Set and the 2010 film The Social Network. Where films in the past treated computers as monoliths dropped into social spaces, the later film represents programming as a form of writing. Today the lines between programming and writing are blurry, since most writing for the Web depends on markup languages that incorporate code. Some have argued that we would do better to treat the act of writing code as a literary activity. All in all, the professions of writing and programming have evolved to form an essential part of what Richard Florida has called the “creative economy.” These ideas about writing and computing are articulated in Neal Stephenson’s open-source manifesto In the Beginning … Was the Command Line. Stephenson contrasts the graphical user interface (GUI) with the textual command line, revealing the common belief that the writing embodied in the command line or in code represents a more fundamental layer of the computer.
Jaani Riordan
- Published in print: 2016
- Published Online: March 2021
- ISBN: 9780198719779
- eISBN: 9780191927416
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198719779.003.0010
- Subject: Law, Intellectual Property, IT, and Media Law
This chapter examines the liability of internet intermediaries for contraventions of the data protection regime. Data protection duties, like those upholding rights of privacy and confidentiality, can impose significant burdens upon internet intermediaries. This is because much of the information in which these services deal will contain ‘personal data’, and in some cases sensitive personal data, while almost all of the activities undertaken by them will involve some form of ‘processing’ of those data.