Wednesday 16 April 2014

OLL 222: PHONETICS AND PHONOLOGY ---- THE OPEN UNIVERSITY OF TANZANIA ---- BY MWL. JAPHET MASATU

OLL 222: PHONETICS AND PHONOLOGY.

INTRODUCTION.
Phonetics (pronounced /fəˈnɛtɪks/, from the Greek: φωνή, phōnē, 'sound, voice') is a branch of linguistics that comprises the study of the sounds of human speech, or—in the case of sign languages—the equivalent aspects of sign.[1] It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs.
The field of phonetics is a multilayered subject of linguistics that focuses on speech. In the case of oral languages there are three basic areas of study:
  • Articulatory phonetics: the study of the production of speech sounds by the articulatory and vocal apparatus of the speaker.
  • Acoustic phonetics: the study of the physical transmission of speech sounds from the speaker to the listener.
  • Auditory phonetics: the study of the reception and perception of speech sounds by the listener.
These areas are interconnected through the common medium of sound and its physical properties, such as frequency (perceived as pitch), amplitude (perceived as loudness), and harmonic structure.
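To make these acoustic notions concrete, the short Python sketch below (an illustrative example added here, using only NumPy; the 120 Hz fundamental and the harmonic amplitudes are arbitrary choices) synthesizes a one-second vowel-like tone as a sum of harmonics: the fundamental frequency corresponds to perceived pitch, the overall scaling to amplitude, and the added partials to harmonic structure.

import numpy as np

SAMPLE_RATE = 16000                            # samples per second
F0 = 120.0                                     # fundamental frequency in Hz (perceived as pitch)
DURATION = 1.0                                 # length of the tone in seconds
HARMONIC_AMPLITUDES = [1.0, 0.6, 0.4, 0.25]    # relative strengths of harmonics 1-4

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Sum sinusoids at integer multiples of the fundamental (the harmonics).
signal = sum(a * np.sin(2 * np.pi * F0 * (k + 1) * t)
             for k, a in enumerate(HARMONIC_AMPLITUDES))

# Normalize so the peak amplitude is 1.0 (an overall loudness scaling).
signal = signal / np.abs(signal).max()

print(f"{signal.size} samples, fundamental {F0} Hz, {len(HARMONIC_AMPLITUDES)} harmonics")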

History

Phonetics was studied as early as 500 BC in the Indian subcontinent, with Pāṇini's account of the place and manner of articulation of consonants in his 5th century BC treatise on Sanskrit. The major Indic alphabets today order their consonants according to Pāṇini's classification. The Phoenicians are credited as the first to create a phonetic writing system, from which all major modern phonetic alphabets are now derived.[2]
Modern phonetics begins with attempts — such as those of Joshua Steele (in Prosodia Rationalis, 1779) and Alexander Melville Bell (in Visible Speech, 1867) — to introduce systems of precise notation for speech sounds.[3][4]

Phonetic transcription

The International Phonetic Alphabet (IPA) is used as the basis for the phonetic transcription of speech. It is based on the Latin alphabet and is able to transcribe most features of speech such as consonants, vowels, and suprasegmental features. It aims to provide a distinct symbol for every documented speech sound (phone) occurring in the world's known languages.
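As a small illustration (the word list and the broad transcriptions below are examples added for this page, not part of the original text), IPA transcriptions are ordinary Unicode strings and can be stored and looked up programmatically, for instance in Python:

# A few English words with broad (phonemic) IPA transcriptions (illustrative only).
ipa_lexicon = {
    "pot":  "/pɒt/",
    "spot": "/spɒt/",
    "thin": "/θɪn/",
    "this": "/ðɪs/",
    "sing": "/sɪŋ/",
}

for word, transcription in ipa_lexicon.items():
    print(f"{word:5s} -> {transcription}")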

The difference between phonetics and phonology

Phonology concerns itself with systems of phonemes, abstract cognitive units of speech sound or sign which distinguish the words of a language. Phonetics, on the other hand, concerns itself with the production, transmission, and perception of the physical phenomena which are abstracted in the mind to constitute these speech sounds or signs.
In early instrumental phonetics, Ludimar Hermann used an Edison phonograph to investigate the spectral properties of vowels and consonants, and it was in these papers that the term formant was first introduced. Hermann also played back vowel recordings made with the phonograph at different speeds in order to test Willis's and Wheatstone's theories of vowel production.

Relation to phonology

In contrast to phonetics, phonology is the study of how sounds and gestures pattern in and across languages, relating such concerns with other levels and aspects of language. Phonetics deals with the articulatory and acoustic properties of speech sounds, how they are produced, and how they are perceived. As part of this investigation, phoneticians may concern themselves with the physical properties of meaningful sound contrasts or the social meaning encoded in the speech signal (socio-phonetics) (e.g. gender, sexuality, ethnicity, etc.). However, a substantial portion of research in phonetics is not concerned with the meaningful elements in the speech signal.
While it is widely agreed that phonology is grounded in phonetics, phonology is a distinct branch of linguistics, concerned with sounds and gestures as abstract units (e.g., distinctive features, phonemes, mora, syllables, etc.) and their conditioned variation (via, e.g., allophonic rules, constraints, or derivational rules).[5] Phonology relates to phonetics via the set of distinctive features, which map the abstract representations of speech units to articulatory gestures, acoustic signals, and/or perceptual representations.[6][7][8]

Subfields

Phonetics as a research discipline has three main branches: articulatory, acoustic, and auditory phonetics, as introduced above.

Transcription

Phonetic transcription is a system for transcribing sounds that occur in a language, whether oral or sign. The most widely known system of phonetic transcription, the International Phonetic Alphabet (IPA), provides a standardized set of symbols for oral phones.[9][10] The standardized nature of the IPA enables its users to transcribe accurately and consistently the phones of different languages, dialects, and idiolects.[9][11][12] The IPA is a useful tool not only for the study of phonetics, but also for language teaching, professional acting, and speech pathology.[11]

Applications

Applications of phonetics include:
  • Forensic phonetics: the use of phonetics (the science of speech) for forensic (legal) purposes.
  • Speech recognition: the analysis and transcription of recorded speech by a computer system (a brief sketch follows this list).
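A minimal sketch of the second application, assuming the third-party Python package SpeechRecognition is installed and that a local file named recording.wav exists (both the package choice and the file name are assumptions of this example); the recognize_google call sends the audio to a free web service and therefore needs an internet connection.

import speech_recognition as sr   # third-party package: SpeechRecognition

recognizer = sr.Recognizer()

# Read a (hypothetical) WAV file and capture its audio data.
with sr.AudioFile("recording.wav") as source:
    audio = recognizer.record(source)

# Ask the recognizer for a transcript and print it.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The recognizer could not understand the audio.")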

See also

  • Experimental phonetics
  • Index of phonetics articles
  • International Phonetic Alphabet
  • Speech processing
  • Acoustics
  • Biometric word list
  • Phonetics departments at universities
  • X-SAMPA
  • ICAO spelling alphabet
  • Buckeye Corpus
  • SaypU (Spell As You Pronounce Universally)

    PHONOLOGY.

    INTRODUCTION.
Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally focused largely on the study of the systems of phonemes in particular languages (and was therefore also called phonemics, or phonematics), but it may also cover any linguistic analysis either at a level beneath the word (including syllable, onset and rhyme, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound is considered to be structured for conveying linguistic meaning. Phonology also includes the study of equivalent organizational systems in sign languages.
    The word phonology (as in the phonology of English) can also refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary.
Phonology is often distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech,[1][2] phonology describes the way sounds function within a given language or across languages to encode meaning. For many linguists, phonetics belongs to descriptive linguistics, and phonology to theoretical linguistics, although establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. Note that this distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.

    Derivation and definitions

The word phonology comes from Greek φωνή, phōnḗ, "voice, sound," and the suffix -logy (from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Saussure's distinction between langue and parole).[3] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items."[1] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use.[4]

    Development of phonology

    The history of phonology may be traced back to the Ashtadhyayi, the Sanskrit grammar composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the Sanskrit language, with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics.
    The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski) introduced the concept of the phoneme in 1876, and his work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology), and had a significant influence on the work of Ferdinand de Saussure.
    An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[3] published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, who was one of the most prominent linguists of the 20th century.
    In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
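A rough feature-based sketch (invented for illustration here, not an excerpt from SPE) can show the mechanism: segments are represented as bundles of binary features, and an ordered rule rewrites feature values to turn an underlying representation into a surface form. The rule below nasalizes a vowel before a nasal consonant.

# Toy segments as bundles of binary distinctive features (illustrative only).
def seg(symbol, **features):
    return {"symbol": symbol, **features}

# Underlying representation of an English-like word /kæn/ "can".
underlying = [
    seg("k", consonantal=True,  nasal=False),
    seg("æ", consonantal=False, nasal=False),
    seg("n", consonantal=True,  nasal=True),
]

def nasalize_vowels(segments):
    """Rule: [-consonantal] -> [+nasal] / _ [+nasal] (vowel nasalization)."""
    output = [dict(s) for s in segments]
    for i in range(len(output) - 1):
        if not output[i]["consonantal"] and output[i + 1]["nasal"]:
            output[i]["nasal"] = True
            output[i]["symbol"] += "\u0303"   # add a combining tilde: æ becomes æ̃
    return output

surface = nasalize_vowels(underlying)
print("underlying:", " ".join(s["symbol"] for s in underlying))
print("surface:   ", " ".join(s["symbol"] for s in surface))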
    Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes which interact with one another; which ones are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second most prominent natural phonologist is Patricia Donegan (Stampe's wife); there are many Natural Phonologists in Europe, though also a few others in the U.S., such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.
    In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for the theories of the organization of phonology as different as lexical phonology and optimality theory.
    Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, John Harris, and many others.
    In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory—an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance: a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in phonology. Though this usually goes unacknowledged, optimality theory was strongly influenced by natural phonology; both view phonology in terms of constraints on speakers and their production, though these constraints are formalized in very different ways.[citation needed] The appeal to phonetic grounding of constraints in various approaches has been criticized by proponents of 'substance-free phonology', especially Mark Hale and Charles Reiss.[5][6]
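The toy evaluator below (an illustration with two made-up constraints, not a published OT analysis) shows the core mechanism: each output candidate is scored against a ranked list of constraints, candidates are compared by their violation profiles read left to right, and the winner may violate a low-ranked constraint precisely because satisfying it would violate a higher-ranked one.

# Each constraint maps a candidate to a number of violations (toy definitions).
def max_io(candidate, underlying="bat"):
    """MAX-IO: one violation per underlying segment missing from the candidate."""
    return max(0, len(underlying) - len(candidate))

def no_coda(candidate):
    """NOCODA: one violation if the candidate ends in a consonant."""
    return 0 if candidate and candidate[-1] in "aeiou" else 1

ranking = [max_io, no_coda]          # MAX-IO outranks NOCODA in this toy grammar
candidates = ["bat", "ba"]           # possible surface forms for underlying /bat/

def violation_profile(candidate):
    return tuple(constraint(candidate) for constraint in ranking)

for c in candidates:
    print(c, violation_profile(c))

# Lexicographic comparison of profiles implements the constraint ranking:
# "bat" (0, 1) beats "ba" (1, 0), so the winner tolerates a NOCODA violation
# rather than delete a segment and violate the higher-ranked MAX-IO.
print("winner:", min(candidates, key=violation_profile))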
    Broadly speaking, government phonology (or its descendant, strict-CV phonology) has a greater following in the United Kingdom, whereas optimality theory is predominant in the United States.[citation needed]

    Analysis of phonemes

An important part of traditional, pre-generative schools of phonology is the study of which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]), while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is, of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words with different meanings that are identical except that one has an aspirated sound where the other has an unaspirated one).
A comparison of the vowel systems of Modern (Standard) Arabic and (Israeli) Hebrew illustrates the distinction between the phonemic and the phonetic point of view. Phonemically, the two systems partly overlap: the distinction between short a, i and u is made by speakers of both languages, but Arabic lacks the mid articulation of short vowels, while Hebrew lacks the distinction of vowel length. Phonetically, the two sets are entirely separate: none of the vowel sounds made by speakers of one language is made by speakers of the other.
    Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However other considerations often need to be taken into account as well.
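One common step, finding minimal pairs, is easy to sketch in code. The word list and the simplifying assumption that each transcription symbol is one segment are illustrative additions here, not data from the text: two transcriptions of equal length that differ in exactly one segment form a minimal pair, which is evidence that the two differing sounds belong to different phonemes.

# Illustrative broad transcriptions, written one symbol per segment (a simplification).
words = {
    "pin": "pɪn",
    "bin": "bɪn",
    "pit": "pɪt",
    "tin": "tɪn",
}

def is_minimal_pair(t1, t2):
    """True if two transcriptions differ in exactly one segment."""
    return len(t1) == len(t2) and sum(a != b for a, b in zip(t1, t2)) == 1

items = list(words.items())
for i, (w1, t1) in enumerate(items):
    for w2, t2 in items[i + 1:]:
        if is_minimal_pair(t1, t2):
            print(f"{w1} / {w2} is a minimal pair ({t1} vs {t2})")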
    The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.
    The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.
    Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.
    Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.

    Other topics in phonology

    In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, accent, and intonation.
Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order, which can be feeding or bleeding[7]), as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
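A toy illustration of rule ordering (the two rules and the underlying form are invented for this sketch, not taken from any particular language): when final-vowel raising applies before palatalization it creates a new input for it, a feeding order, whereas the opposite order leaves palatalization unfed, so the two orders yield different surface forms.

# Two invented rewrite rules, using "ch" as a plain-text stand-in for an affricate.
def raising(form):
    """Rewrite word-final 'e' as 'i'."""
    return form[:-1] + "i" if form.endswith("e") else form

def palatalization(form):
    """Rewrite 't' as 'ch' before 'i'."""
    return form.replace("ti", "chi")

def derive(underlying, rules):
    """Apply the rules to the underlying form in the given order."""
    form = underlying
    for rule in rules:
        form = rule(form)
    return form

underlying = "kate"
print(derive(underlying, [raising, palatalization]))   # feeding order -> "kachi"
print(derive(underlying, [palatalization, raising]))   # opposite order -> "kati"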
    The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sub-lexical units are not instantiated as speech sounds.

    See also

  • Absolute neutralisation
  • Cherology
  • English phonology
  • List of phonologists (also Category: Phonologists)
  • Morphophonology
  • Phoneme
  • Phonological development
  • Phonological hierarchy
  • Prosody (linguistics)
  • Phonotactics
  • Second language phonology
  • Phonological rule