Gennady Estraikh
- Published in print: 1999
- Published Online: October 2011
- ISBN: 9780198184799
- eISBN: 9780191674365
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780198184799.003.0007
- Subject: Literature, European Literature
This chapter analyses some significant features and types of Soviet Yiddish word-formation. Under the influence of Russian, Soviet Yiddish widely utilized univerbalization, that is, compressing a phrase into one word. Four types of univerbalization are described: stump-compounds, semi-abbreviations, acronyms, and univerbs with the suffix -ke. Adjectivalization in Soviet Yiddish — a process which is, in a sense, contrary to compounding — is also discussed. As for affixation in Soviet Yiddish lexical innovations, neither new prefixes nor new suffixes were created. However, old Yiddish affixes frequently took part in new types of derivational processes or gained new semantic features. The chapter concludes by examining word-formation with the suffix -nik and verb-forms with the prefix der-.
Tim Bollerslev
- Published in print: 2010
- Published Online: May 2010
- ISBN: 9780199549498
- eISBN: 9780191720567
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199549498.003.0008
- Subject: Economics and Finance, Econometrics
This chapter provides an alternative and easy-to-use encyclopedic-type reference guide to the long list of ARCH acronyms. Comparing the length of this list to the list of general Acronyms in Time Series Analysis (ATSA) compiled by Granger (1983) further underscores the scope of the research efforts and new developments that have occurred in the area following the introduction of the basic linear ARCH model in Engle (1982a).
Jeanne Fahnestock
- Published in print: 2011
- Published Online: January 2012
- ISBN: 9780199764129
- eISBN: 9780199918928
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199764129.003.0003
- Subject: Linguistics, Applied Linguistics and Pedagogy
The English lexicon changes constantly as users coin new words and press existing words into new uses. Novel English words are formed by a variety of methods, including compounding existing words, adding affixes, clipping, blending, creating acronyms, and converting from one part of speech to another. Many of these word-morphing and coining options were discussed in rhetorical manuals, and understanding these methods of word formation leads to an appreciation of English morphology. Coined words are often rhetorical “hot spots”; they indicate an arguer's attempt to convey a novel content/form pairing, and they often argue for “newness” in themselves. This chapter offers examples of each form of coinage, some from arguments where the new word trenchantly delivers an argument. The chapter also covers the inevitable processes of users changing meanings over time, and of losing words as they fall out of current if not potential usage. The process of change and loss is illustrated with an extended case study of the variable meanings of the word junk, beginning with its use by Darwin in a passage from The Voyage of the Beagle where the sense is difficult to recover.
Jonathon Keats
- Published in print: 2010
- Published Online: November 2020
- ISBN: 9780195398540
- eISBN: 9780197562826
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195398540.003.0017
- Subject: Computer Science, Programming Languages
Developing an open-source alternative to the UNIX operating system in the early 1980s, the master hacker Richard Stallman faced a dilemma: if he put his new GNU software in the public domain, people could copyright their improved versions, undermining the open-source cycle by taking away the freedoms he’d granted. So Stallman copyrighted GNU himself, and distributed it, at no cost, under a license that arguably was to have greater impact on the future of computing than even the software he was striving to protect. The GNU Emacs General Public License was the founding document of the copyleft. The word copyleft predated Stallman’s innovation by at least a couple of decades. It had been used jestingly, together with the phrase “All Rights Reversed,” in lieu of the standard copyright notice on the Principia Discordia, an absurdist countercultural religious doctrine published in the 1960s. And in the 1970s the People’s Computer Company provocatively designated Tiny BASIC, an early experiment in open-source software, “Copyleft—All Wrongs Reserved.” Either of these may have indirectly inspired Stallman’s phrasing. (He first encountered the word copyleft as a humorous slogan stamped on a letter from his fellow hacker Don Hopkins.) Stallman’s genius was to realize this vague countercultural ideal in a way that was legally enforceable. That Stallman was the one to do so, and the Discordians weren’t, makes sense when one considers his method. His license stipulated that GNU software was free to distribute, and that any aspect of it could be freely modified except the license, which would mandatorily carry over to any future version, ad infinitum, ensuring that GNU software would always be free to download and improve. “The license agreements of most software companies keep you at the mercy of those companies,” Stallman wrote in the didactic preamble to his contract. “By contrast, our general public license is intended to give everyone the right to share GNU Emacs. 
To make sure that you get the rights we want you to have, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights.” Freedom was paradoxically made compulsory.
Jonathon Keats
- Published in print: 2010
- Published Online: November 2020
- ISBN: 9780195398540
- eISBN: 9780197562826
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195398540.003.0034
- Subject: Computer Science, Programming Languages
Of the many challenges facing tourism in space, one of the least obvious is the problem of intergalactic monetary exchange. Far more pressing to the nascent industry are issues such as extraterrestrial transportation and gravity-free accommodations. Charles Simonyi’s twelve-day trip to the International Space Station in 2007 cost him $25 million, more than the budget of an average family vacation. Yet years before even the most optimistic technophiles expect space tourism to be more than a fifteen-minute suborbital joyride on Virgin Galactic, a currency has been established, initially trading on Travelex for $12.50. It’s called the quid. Quid is an acronym for “quasi-universal intergalactic denomination.” Of course it’s also an appropriation of British slang for the pound sterling, and it is this association with the common term for a familiar item that gives it resonance, an evocative word for a provocative concept. One might have expected the new space money to repurpose the official name of an existing currency. The British and French have preferred that strategy when they’ve colonized other countries, and even Douglas Adams, for all his creativity, fell upon the formula when he coined the Altairian dollar in The Hitchhiker’s Guide to the Galaxy. But colonization robs a place of its exoticism. And if space tourism has any purpose, it’s escapism in extremis. Unlike the pound or the dollar, the quid has no inherent allegiances. The word has also been used at various stages as slang for the shilling, the sovereign, and the guinea, as well as the euro and the old Irish punt. 
Even the origin is “obscure,” according to the Oxford English Dictionary, which cites a characteristic early use of the word in Thomas Shadwell’s Squire of Alsatia: “Let me equip thee with a Quid.” The 1688 publication date of Shadwell’s play overrules one popular folk etymology, which claims that quid is short for Quidhampton, location of a mill that produced paper money for the Bank of England. The Bank of England wasn’t established until 1694.
David Millie
- Published in print: 2013
- Published Online: May 2014
- ISBN: 9780813143262
- eISBN: 9780813144283
- Item type: chapter
- Publisher: University Press of Kentucky
- DOI: 10.5810/kentucky/9780813143262.003.0001
- Subject: History, Military History
The Australian Army posted the author to the Australian Army Training Team Vietnam as an advisor, and the chapter covers some of the professional and family preparations undertaken before leaving home for war. A chronology notes significant events that affected morale, ranging from high-level political pronouncements to the assassination of leaders. A summary of the characters involved in Quang Tri, and their roles, emphasizes the human dimension of the war. An explanation of the thematic structure of the chapters aids understanding of a multilayered and multinational story. The principles of war are outlined to assist the reader, as is a general description of the acronyms and abbreviations listed in a glossary.
Deborah Nolan and Sara Stoudt
- Published in print: 2021
- Published Online: July 2021
- ISBN: 9780198862741
- eISBN: 9780191895357
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198862741.003.0008
- Subject: Mathematics, Analysis, Applied Mathematics
This chapter addresses writing challenges specific to summarizing a data analysis. The aim is to provide guidance on how to craft clear sentences, choose appropriate words, and convey findings in a compelling manner that is faithful to the data and avoids overstating any implications. This chapter offers advice on how to differentiate statistical terminology from everyday language, represent numbers in text, write mathematical expressions, and choose the correct quantitative nouns and adjectives (e.g., fewer or less, percent or percentage).
Geert Booij
- Published in print: 2019
- Published Online: May 2019
- ISBN: 9780198838852
- eISBN: 9780191874833
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780198838852.003.0003
- Subject: Linguistics, Syntax and Morphology
Dutch complex words are formed by means of suffixation, prefixation, and conversion (change of category without morphological marking). The productivity of a word formation process is subject to various types of restriction, partially having to do with the layer of the lexicon (native or non-native). There are also word formation patterns that are unproductive. The grammar has to specify them nevertheless because these patterns still have a motivating function and reduce the arbitrariness of form-meaning correspondences. Special forms of word coining are blending, clipping, and the formation of acronyms.
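The three special forms of word coining named in the abstract (blending, clipping, and acronym formation) can be pictured as simple string operations. The sketch below is an illustration of mine, not the chapter's analysis; the example words and cut points are hypothetical English ones, not the chapter's Dutch data.

```python
# Illustrative sketch only: three word-coining processes as string operations.
# All example words and cut points are hypothetical, not data from the chapter.

def clip(word: str, keep: int) -> str:
    """Clipping: keep only an initial portion of the word."""
    return word[:keep]

def blend(first: str, second: str, head: int, tail: int) -> str:
    """Blending: splice the start of one word onto the end of another."""
    return first[:head] + second[tail:]

def acronym(phrase: str) -> str:
    """Acronym formation: initial letters of each word, read as one word."""
    return "".join(word[0].upper() for word in phrase.split())

print(clip("laboratory", 3))              # lab
print(blend("breakfast", "lunch", 2, 1))  # brunch
print(acronym("random access memory"))    # RAM
```

Real coinages are of course constrained by phonology and the lexicon in ways this toy makes no attempt to model; it only shows the formal relation between source and output.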
D. Gary Miller
- Published in print: 2014
- Published Online: April 2014
- ISBN: 9780199689880
- eISBN: 9780191770371
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/acprof:oso/9780199689880.003.0009
- Subject: Linguistics, Syntax and Morphology, Theoretical Linguistics
This chapter outlines the essential phonological structure of words that will be of crucial importance in subsequent chapters, and begins a discussion of word abridgements. The prosodic hierarchy is presented. Words consist of feet, which are optimally binary trochaic. Feet consist of syllables, and syllables consist of segments which are arranged according to sonority. Support is offered for the leading idea that the sonority hierarchy constrains basic word form. Abridgements are a form of economy. It is discussed how alphabetisms and acronyms become new words in their own right, capable of undergoing affixation and conversion in ways that are impossible for their source phrases. Because of high-tech communication, modern abridgements are shown to be more abstract.
Jonathon Keats
- Published in print: 2010
- Published Online: November 2020
- ISBN: 9780195398540
- eISBN: 9780197562826
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195398540.003.0010
- Subject: Computer Science, Programming Languages
“It’s really just complete gibberish,” seethed Larry Ellison when asked about the cloud at a financial analysts’ conference in September 2008. “When is this idiocy going to stop?” By March 2009 the Oracle CEO had answered his own question, in a manner of speaking: in an earnings call to investors, Ellison brazenly peddled Oracle’s own forthcoming software as “cloud-computing ready.” Ellison’s capitulation was inevitable. The cloud is ubiquitous, the catchiest online metaphor since Tim Berners-Lee proposed “a way to link and access information of various kinds” at the European Organization for Nuclear Research (CERN) in 1990 and dubbed his creation the WorldWideWeb. In fact, while many specific definitions of cloud computing have been advanced by companies seeking to capitalize on the cloud’s popularity—Dell even attempted to trademark the term, unsuccessfully—the cloud has most broadly come to stand for the web, a metaphor for a metaphor, reminding us of how unfathomable our era’s signal invention has become. When Berners-Lee conceived the web his ideas were anything but cloudy. His inspiration was hypertext, developed by the computer pioneer Ted Nelson in the 1960s as a means of explicitly linking wide-ranging information in a nonhierarchical way. Nelson envisioned a “docuverse” which he described as “a unified environment available to everyone providing access to this whole space.” In 1980 Berners-Lee implemented this idea in a rudimentary way with a program called Enquire, which he used to cross-reference the software in CERN’s Proton Synchrotron control room. Over the following decade, machines such as the Proton Synchrotron threatened to swamp CERN with scientific data. Looking forward to the Large Hadron Collider, physicists began voicing concern about how they’d ever process their experiments, let alone productively share results with colleagues. Berners-Lee reckoned that, given wide enough implementation, hypertext might rescue them. 
He submitted a proposal in March 1989 for an “information mesh” accessible to the several thousand CERN employees. “Vague, but interesting,” his boss replied. Adequately encouraged, Berners-Lee spent the next year and a half struggling to refine his idea, and also to find a suitable name.
Jonathon Keats
- Published in print: 2010
- Published Online: November 2020
- ISBN: 9780195398540
- eISBN: 9780197562826
- Item type: chapter
- Publisher: Oxford University Press
- DOI: 10.1093/oso/9780195398540.003.0024
- Subject: Computer Science, Programming Languages
The big news in the Twitterverse on October 19, 2009, was the sighting of the pentagigatweet. Sent by an out-of-work dotcom executive named Robin Sloan, the six-character text message, a bit of banter between friends, garnered more attention than the war in Afghanistan or the swine flu pandemic. “Oh lord,” it read. The message was sent at 10:28 a.m. PST. By 3:47 p.m. a CNET news story proclaimed, “Twitter hits 5 billion tweets,” quoting Sloan’s two-word contribution to telecommunications history, and noting that he’d geekily dubbed it the pentagigatweet. The following day newspapers around the world, from the Telegraph in England to Il Messaggero in Italy, had picked up the story, yet the most extensive coverage was on Twitter itself, where nearly 30 percent of the estimated 25 million daily messages referenced the benchmark. The numbers were impressive. But more remarkable than the level of popularity achieved in the mere thirty-eight months since the microblogging service launched in 2006 was the degree to which those who used it felt responsible for building it. The megatweeting greeting the pentagigatweet was a sort of collective, networked navel-gazing. In the days following the five billionth text message Twitter was atwitter with self-congratulation. That sense of personal investment, essential to Twitter’s growth, was entirely by design. As Jack Dorsey explained in an interview with the Los Angeles Times about the company he cofounded, “The concept is so simple and so open-ended that people can make of it whatever they wish.” Dorsey based the service on his experience writing dispatch software and his insight that the best way to observe a city in real time was to monitor the dispatches coming from couriers and taxis and ambulances. 
Twitter was created to put that experience in the hands of ordinary citizens, literally, by asking people to periodically send in text messages by mobile phone answering the question “What are you doing?” All participants would be able to follow the stream of responses. In other words, Twitter was formulated as a sort of relay, utterly dependent on the public for content.
Agnes Kukulska-Hulme
- Published in print:
- 1999
- Published Online:
- November 2020
- ISBN:
- 9780195108385
- eISBN:
- 9780197561041
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195108385.003.0014
- Subject:
- Computer Science, Human-Computer Interaction
So many user interfaces have the appearance of a collection of labels, stuck onto invisible boxes whose contents remain a mystery to users until they have made the effort of opening up each box in turn and sifting through its contents. In order to explore what might be called “the language of labeling,” we must first make some observations about the relationship between terms and concepts. Terms are words with special subject meanings; a term may consist of one or more “units” (e.g., user interface). As has been pointed out by Sager (1990), concepts are notoriously difficult to define; it is, however, possible to group them into four basic types:
• class concepts or entities, generally corresponding to nouns
• property concepts or qualities, for the most part corresponding to adjectives
• relation concepts, realized through various parts of speech, such as prepositions
• function concepts or activities, corresponding to nouns and verbs
Looking at the relationship between terms and concepts will help us to think about whether terms can be used to label various types of knowledge and also whether they can properly represent users’ knowledge needs. The present book is structured around linguistic “concepts” in the broad sense, whereas in this chapter, when we refer to concepts, it is in the narrower terminological sense indicated above. “We can use any names we wish as labels for concepts so long as we use them consistently. The only other criterion is convenience.” In special subject areas, these same criteria apply, except that communication of specialized knowledge obliges us to take account of how concepts have been labeled by others and how the concepts we are handling fit into a wider scheme. We can draw up systems of concepts and try to specify relationships between them, uncovering along the way the knowledge structures that bind them together. However, we cannot do the same with terms.
Terms are existential in nature, that is to say, they signal the existence of an entity, a relationship, an activity, or a quality.
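The four-way classification above is essentially a small lookup scheme, and it can be sketched as one. The table and helper below are illustrative only (the names and groupings are this editor's rendering of the summary, not code from the chapter):

```python
# Sketch of the four basic concept types summarized above,
# mapped to the parts of speech that typically realize them.
CONCEPT_TYPES = {
    "class (entity)":      ["noun"],
    "property (quality)":  ["adjective"],
    "relation":            ["preposition", "other parts of speech"],
    "function (activity)": ["noun", "verb"],
}

def types_for_pos(pos):
    """Return the concept types that a given part of speech can realize."""
    return [ctype for ctype, pos_list in CONCEPT_TYPES.items() if pos in pos_list]

# A noun may label either an entity or an activity, which is one reason
# a bare label on an interface "box" underdetermines its contents.
print(types_for_pos("noun"))
```

Note that the same part of speech can realize more than one concept type, which mirrors the chapter's point that a label alone cannot fully represent a user's knowledge needs.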
John A. Tossell and David J. Vaughan
- Published in print:
- 1992
- Published Online:
- November 2020
- ISBN:
- 9780195044034
- eISBN:
- 9780197560013
- Item type:
- chapter
- Publisher:
- Oxford University Press
- DOI:
- 10.1093/oso/9780195044034.003.0003
- Subject:
- Earth Sciences and Geography, Geochemistry
The early descriptions of chemical bonding in minerals and geological materials utilized purely ionic models. Crystals were regarded as being made up of charged atoms or ions that could be represented by spheres of a particular radius. Based on interatomic distances obtained from the early work on crystal structures, ionic radii were calculated for the alkali halides (Wasastjerna, 1923) and then for many elements of geochemical interest by Goldschmidt (1926). Modifications to these radius values by Pauling (1927) and others took account of such factors as different coordination numbers and their effects on radii. The widespread adoption of ionic models by geochemists resulted both from the simplicity and ease of application of these models and from the success of rules based upon them. Pauling’s rules (1929) enabled the complex crystal structures of mineral groups such as the silicates to be understood and to a limited extent be predicted; Goldschmidt’s rules (1937) to some degree enabled the distribution of elements between mineral phases or mineral and melt to be understood and predicted. Such rules are further discussed in later chapters. Ionic approaches have also been used more recently in attempts to simulate the structures of complex solids, a topic discussed in detail in Chapter 3. Chemical bonding theory has, of course, been an important component of geochemistry and mineralogy since their inception. Any field with a base of experimental data as broad as that of mineralogy is critically dependent upon theory to give order to the data and to suggest priorities for the accumulation of new data. Just as the bond with predominantly ionic character was the first to be quantitatively understood within solid-state science, the ionic bonding model was the first used to interpret mineral properties. Indeed, modern studies described herein indicate that structural and energetic properties of some minerals may be adequately understood using this model.
However, there are numerous indications that an ionic model is inadequate to explain many mineral properties. It also appears that some properties that may be rationalized within an ionic model may also be rationalized assuming other limiting bond types.
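The idea behind the early radius tables is additivity: in a hard-sphere ionic model, the cation-anion distance should be close to the sum of the two ionic radii. The sketch below checks this for a few alkali halides; the radii and distances are approximate textbook (Pauling-style) values in angstroms supplied for illustration, not numbers taken from this chapter:

```python
# Additivity check for a hard-sphere ionic model:
# interatomic distance ~ r(cation) + r(anion).
# Approximate Pauling ionic radii, in angstroms (illustrative values).
PAULING_RADII = {"Na+": 0.95, "K+": 1.33, "F-": 1.36, "Cl-": 1.81, "Br-": 1.95}

# Approximate observed cation-anion nearest-neighbour distances
# in rock-salt-structure crystals, in angstroms.
OBSERVED = {
    ("Na+", "F-"): 2.31,
    ("Na+", "Cl-"): 2.81,
    ("K+", "Cl-"): 3.14,
    ("K+", "Br-"): 3.29,
}

def predicted_distance(cation, anion):
    """Ionic-model prediction: sum of the two ionic radii."""
    return PAULING_RADII[cation] + PAULING_RADII[anion]

for (cat, an), d_obs in OBSERVED.items():
    d_pred = predicted_distance(cat, an)
    print(f"{cat}-{an}: predicted {d_pred:.2f} A, observed {d_obs:.2f} A")
```

For these simple halides the predicted and observed distances agree to within a few hundredths of an angstrom, which is the kind of success that made ionic models so attractive; the chapter's point is that this agreement does not extend to many other mineral properties.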