
Productivity and Reuse in Language: A Theory of Linguistic Computation and Storage (Hardcover) (Timothy J. O'Donnell)

About this item

Language allows us to express and comprehend an unbounded number of thoughts. This fundamental and much-celebrated property is made possible by a division of labor between a large inventory of stored items (e.g., affixes, words, idioms) and a computational system that productively combines these stored units on the fly to create a potentially unlimited array of new expressions. A language learner must discover a language's productive, reusable units and determine which computational processes can give rise to new expressions. But how does the learner differentiate between the reusable, generalizable units (for example, the affix -ness, as in coolness, orderliness, cheapness) and apparent units that do not actually generalize in practice (for example, -th, as in warmth but not coolth)? In this book, Timothy O'Donnell proposes a formal computational model, Fragment Grammars, to answer these questions. This model treats productivity and reuse as the target of inference in a probabilistic framework, asking how an optimal agent can make use of the distribution of forms in the linguistic input to learn the distribution of productive word-formation processes and reusable units in a given language.

O'Donnell compares this model to a number of other theoretical and mathematical models, applying them to the English past tense and English derivational morphology, and showing that Fragment Grammars unifies a number of superficially distinct empirical phenomena in these domains and justifies certain seemingly ad hoc assumptions in earlier theories.
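The book's Fragment Grammars model is a full probabilistic inference framework; as a loose illustration of the underlying intuition (inferring which affixes are productive from the distribution of forms in the input), the sketch below computes Baayen's hapax-based productivity measure P = V1 / N for the suffixes -ness and -th over a small hypothetical word list. This is a standard corpus-linguistics proxy, not the model described in the book, and the toy corpus is invented for illustration only.

```python
# Minimal sketch, assuming a toy corpus: estimate suffix productivity with
# Baayen's measure P = V1 / N, where V1 is the number of suffixed word types
# seen exactly once (hapaxes) and N is the total number of suffixed tokens.
# A productive suffix like -ness tends to yield many hapaxes; an unproductive
# one like -th yields few. This is NOT the Fragment Grammars model itself.

from collections import Counter

toy_corpus = (
    "coolness coolness orderliness cheapness sadness sadness brightness "
    "happiness kindness weirdness warmth warmth warmth length length width"
).split()

def productivity(suffix: str, tokens: list) -> float:
    """Return Baayen's P for `suffix`: hapax types / total suffixed tokens."""
    suffixed = [w for w in tokens if w.endswith(suffix)]
    if not suffixed:
        return 0.0
    counts = Counter(suffixed)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / len(suffixed)

for suffix in ("ness", "th"):
    print(f"-{suffix}: P = {productivity(suffix, toy_corpus):.2f}")
# Expected output on this toy data: -ness scores noticeably higher than -th,
# mirroring the coolness/coolth contrast discussed above.
```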

Number of Pages: 337
Genre: Science, Language Arts + Disciplines
Format: Hardcover
Publisher: MIT Press
Author: Timothy J. O'Donnell
Language: English
Street Date: August 28, 2015
TCIN: 16890955
UPC: 9780262028844
Item Number (DPCI): 247-42-0899