Simple Search from the British Library 









Type a word or phrase in the search box and press the Go button to see up to 50 random hits from the corpus.


You can search for a single word or a phrase, restrict searches by part of speech, search in parts of the corpus only, and much more. This is a link to the simple search facility hosted by the British Library.

The search result will show the total frequency in the corpus and up to 50 examples.

 


Theme 3. Structuralism.

Lecture # 3. Structuralism. Methods of structural analysis.

1. Structural grammatical theories.

2. N. Chomsky's Linguistic Conception. Competence and performance. Basic rules of Generative Grammar (= phrase structure rules).

Structuralism, from which Structural Analysis derives, is the methodological principle that human culture is made up of systems in which a change in any element produces changes in the others. Four basic types of theoretical or critical activities have been regarded as structuralist: the use of language as a structural model, the search for universal functions or actions in texts, the explanation of how meaning is possible, and the post-structuralist denial of objective meaning.

1. Structural grammatical theories.

Two main streams dominated linguistics in the 20th century. The first is structuralism, represented by the Prague School, which created functional linguistics; the Copenhagen School, which created Glossematics (essentially Saussure's system but more abstract: their account of the difference between language and speech is four-member: scheme / norm / usage / act); and the American School, which created descriptive linguistics. The second stream of linguistic thinking is generativism, inseparably connected with the name of Noam Chomsky, whose work meant a fundamental breakthrough in the development of linguistic theory in the second half of the 20th century.

The essence of Structural Linguistics is in the tenet (belief) that every element has its place in the integrity of language structure, and it is important to establish its place, its relation to other elements and, consequently, its function. Structural Linguistics deals with a real language structure, and a scholar's task is to reveal it with the aim of a fuller cognition of language nature and the laws of its functioning. Thus, structural grammarians are to a large degree concerned with studying patterns of organization, or structures. They hold the view that linguistics, like physics and chemistry or, say, geology or astronomy, must be preoccupied with structure.

Central to structuralism is the notion of opposition and oppositional analysis which is connected with the Prague School, founded in 1929 by Czech and Russian linguists Vilem Mathesius, Nikolay Trubetzkoy, Roman Jakobson and others.

Oppositional analysis was first introduced by Nikolay Trubetzkoy (1890-1938), who presented an important survey of the problems of phonology in his Grundzüge der Phonologie ('The Fundamentals of Phonology'), published posthumously in Prague in 1939.

In terms of N.S. Trubetzkoy's theory, opposition is defined as a functionally relevant relationship of partial difference between two partially similar elements of language. The common features of the members of the opposition make up its basis; the features that serve to differentiate them are distinctive features. For example, /b/ and /p/ are members of a phonological opposition: in English the phoneme /b/ is characterized by voicing, stop articulation (that is, it involves a complete closure), and it is oral, that is non-nasal, whereas /p/ shares all of those characteristics except voicing.
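The feature logic of such an opposition can be sketched with set operations; this is a toy illustration, and the feature labels are simplified assumptions rather than Trubetzkoy's full inventory:

```python
# Toy sketch of oppositional analysis: each phoneme is modelled as a
# set of (simplified, assumed) feature labels.
b = {"bilabial", "stop", "oral", "voiced"}
p = {"bilabial", "stop", "oral"}   # voiceless: lacks "voiced"

basis = b & p        # common features: the basis of the opposition
distinctive = b ^ p  # features that differentiate the members

print(sorted(basis))       # ['bilabial', 'oral', 'stop']
print(sorted(distinctive)) # ['voiced']
```

The intersection recovers the basis of the opposition, and the symmetric difference recovers its distinctive feature(s), exactly as in the /b/ : /p/ example above.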

Girl and girlish are members of a morphemic opposition. They are similar as the root morpheme girl- is the same. Their distinctive feature is the suffix –ish.

Man and boy are members of a lexical opposition, which is defined as the semantically relevant relationship of partial difference between two partially similar words. The distinctive feature in the opposition is the semantic component of age.

Morphological (formal) opposition may be well illustrated by the pair play vs plays, which represents the opposition between the third person singular present tense, on the one hand, and the other persons of the singular plus those of the plural, on the other.

Oppositional relations on the sentence level are most obvious in the correlation between Peter plays and Peter does not play which gives the opposition affirmation vs negation. Correlation between Peter plays and Does Peter play? illustrates the opposition declarative vs interrogative sentence.

The main contribution of the American Descriptive School to the study of grammar is the elaboration of techniques of linguistic analysis. The main methods are:

1) the distributional analysis;

2) the Immediate Constituent (IC) analysis (phrase-structure grammar).

The American Descriptive School began with the works of Edward Sapir (1884-1939) and Leonard Bloomfield (1887-1949). American linguistics developed under the influence of these two prominent scholars. The ideas laid down in Bloomfield's book Language (1933) were later developed by Z.S. Harris, R.S. Wells, Ch.F. Hockett, Ch.C. Fries, E.A. Nida.

Descriptive linguistics developed in the United States from the necessity of studying half-known and unknown languages of the American Indian tribes. The Indian languages had no writing and, therefore, had no history. The comparative historical method was of little use here, and the first step of work was to be keen observation and rigid registration of linguistic forms.

The American Indian languages belong to a type that has little in common with the Indo-European languages; they are incorporating languages, devoid of morphological forms of separate words and of corresponding grammatical meanings. Descriptive linguists had therefore to give up analyzing sentences in terms of traditional parts of speech; it was by far more convenient to describe linguistic forms according to their position and their co-occurrence in sentences.

American descriptive linguists began by criticizing the Prague School oppositional method and claiming a more objective — distributional — approach to linguistic analysis.

1) Distributional analysis aims at analyzing linguistic elements in terms of their distribution.

The term distribution is used to denote the possible variants of the immediate lexical, grammatical, and phonetic environment of a linguistic unit (phoneme, morpheme, word, etc.). It implies the position of an element and its combinability with other elements in this or that particular context.

According to Z. Harris [1961: 15-16], the distribution of an element is the total of all environments in which it occurs, i.e. the sum of all the (different) positions (or occurrences) of an element relative to the occurrence of other elements.

Distribution is a matter of speech: it is describable in terms of positions and in terms of positional classes (distributional classes) of fillers for these positions. Therefore, the distribution of an element is given by the distributional formula, which is the contextual pattern of the environment characteristic of the concrete occurrence of a linguistic unit. The distributional value of the verb get, for instance, may be shown by the following examples:

get + N (notional verb): get a book

get + A (copula-type verb): get cool

get + Vinf (semi-auxiliary verb of aspect): get to think

get + Ving (semi-auxiliary verb of aspect): get thinking

get + prep + Ving (semi-auxiliary verb of aspect): get to thinking

get + N + Vinf (causative verb): get him to work

get + N + Ving (causative verb): get the watch going

get + N + Ven (causative verb): get it done

get + Ven (the so-called passive auxiliary): get killed

have got + Vinf (modal verb): it has got to be done

get + Ven (function verb of an analytical lexical unit): get rid
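The distributional formulas above can be sketched as a small lookup table. This is illustrative only: the pattern strings and the helper `classes_for` are inventions of this sketch, not a standard notation. A list of pairs is used rather than a dictionary because the pattern "get + Ven" is ambiguous between two classes:

```python
# The distributional formulas for GET from the text, paired with the
# verb class each environment signals (pattern strings are an assumed,
# informal notation for this sketch).
get_distribution = [
    ("get + N",           "notional verb"),                  # get a book
    ("get + A",           "copula-type verb"),               # get cool
    ("get + Vinf",        "semi-auxiliary verb of aspect"),  # get to think
    ("get + Ving",        "semi-auxiliary verb of aspect"),  # get thinking
    ("get + prep + Ving", "semi-auxiliary verb of aspect"),  # get to thinking
    ("get + N + Vinf",    "causative verb"),                 # get him to work
    ("get + N + Ving",    "causative verb"),                 # get the watch going
    ("get + N + Ven",     "causative verb"),                 # get it done
    ("get + Ven",         "passive auxiliary"),              # get killed
    ("have got + Vinf",   "modal verb"),                     # it has got to be done
    ("get + Ven",         "function verb of an analytical lexical unit"),  # get rid
]

def classes_for(pattern):
    """All functional classes compatible with a distributional formula."""
    return [cls for pat, cls in get_distribution if pat == pattern]

print(classes_for("get + N + Ven"))  # ['causative verb']
print(classes_for("get + Ven"))      # two readings: auxiliary or function verb
```

The point of the exercise is the method itself: the environment alone, without any appeal to meaning, sorts occurrences of get into functional classes.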

 

2) Immediate Constituent analysis = phrase-structure grammar. The concept of IC analysis was first introduced by Leonard Bloomfield and later on developed by Rulon S. Wells and other linguists: K.L. Pike, S. Chatman, E.A. Nida, R.S. Pittman.

(IC) analysis was originally elaborated as an attempt to show how small constituents (or components) in sentences go together to form larger constituents. It was discovered that combinations of linguistic units are usually structured into hierarchically arranged sets of binary constructions, e.g., in the word-group a black dress in severe style we do not relate a to black, black to dress, dress to in, etc. but set up a structure which may be represented as a black dress / in severe style.

An Immediate Constituent (IC) is a group of linguistic elements which functions as a unit in some larger whole.

The division of a construction begins with the larger elements and continues as far as possible. Successive segmentation results in Ultimate Constituents (UC), i.e. two-facet units that cannot be segmented into smaller units having both sound-form and meaning. The Ultimate Constituents of the word-group analyzed above are: a / black / dress / in / severe / style.

So, the fundamental aim of IC analysis is to segment each utterance into (two) maximally independent sequences or ICs, thus revealing the hierarchical structure of this utterance.

The analysis of the constituent structure of the sentence can be represented in different types of diagrams:

1) the following diagram (a table) simply shows the distribution of the constituents at different levels; it can be used to show the types of forms which can substitute for each other at different levels of constituent structure:

The man │ saw  │ the thief │ in a car
Fred    │ took │ Jean      │ to Honolulu
He      │ came │           │ home

 

2) a candelabra diagram, in which lines drawn beneath the sentence join the constituents at successively higher levels:

The  man   hit   the  ball
 |____|     |     |____|
    |       |_______|
    |__________|

3) Another type of diagram uses slashes (/) to show the groupings of ICs:

My younger brother / left all his things there.

My // younger brother / left all his things // there.

My // younger /// brother / left /// all his things // there.

My // younger /// brother / left /// all //// his things // there.

My // younger /// brother / left /// all //// his ///// things // there.
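The successive segmentation above can be reproduced mechanically: if the IC structure is stored as nested binary pairs, each cut between two ICs is marked with as many slashes as the depth at which the cut is made. A minimal sketch (the tree literal simply encodes the analysis shown above):

```python
def ic_slashes(node, depth=1):
    """Render the final line of a successive IC segmentation: the cut
    between the two ICs of a construction gets `depth` slashes."""
    if isinstance(node, str):
        return node
    left, right = node  # every construction is binary
    sep = " " + "/" * depth + " "
    return ic_slashes(left, depth + 1) + sep + ic_slashes(right, depth + 1)

tree = (("My", ("younger", "brother")),
        (("left", ("all", ("his", "things"))), "there"))

print(ic_slashes(tree))
# My // younger /// brother / left /// all //// his ///// things // there
```

The recursion mirrors the definition of IC analysis: the division begins with the largest elements (one slash) and continues as far as possible, and the strings at the leaves are the Ultimate Constituents.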

 

4) A labeled brackets diagram: the first step is to put brackets (one on each side) around each constituent, and then more brackets around each combination of constituents [Yule 1996: 94].

[[The] [dog]] [[followed] [[the] [boy]]]

5) A derivation tree diagram – we can label each constituent with grammatical terms such as Det (article), N (noun), NP (noun phrase), V (verb), VP (verb phrase), S (sentence).

            S
           / \
         NP   VP
        /  \  / \
     Det    N V   NP
      |     | |   / \
     The  dog ate Det  N
                   |   |
                  the bone

The resulting sentence could be The dog ate the bone. Such a tree diagram is also called a phrase marker.

The IC theory (or grammar), also called the phrase-structure theory (grammar), was the first modern grammar fit for generating sentences. Once the IC model was created and diagrammed, only one step remained to understanding it as a generative model, a model by which sentences can be built (or generated).

The most striking figure here is Noam Chomsky with his theory of Generative-Transformational Grammar; the starting-point was his book Syntactic Structures (1957). He sought a simple linguistic theory which would generate all the sequences of morphemes (or words) that constitute grammatical English sentences.

 

2. N. Chomsky's Linguistic Conception. Competence and performance. Basic rules of Generative Grammar (= phrase structure rules).

Basic notions of Chomsky's psycholinguistic conception are: language; language faculty; cognize a language; cognition; language acquisition; language acquisition device; knowledge of language; mind/brain; innateness; productivity/creativity; acceptable utterances; marginal acceptability; competence; performance; grammaticality; Generative/Universal Grammar; language/linguistic universals.

 

Chomsky pleaded for a dynamic approach as represented by his theory of Transformational and Generative Grammar, emphasizing that linguistic theory is mentalistic, concerned with discovering a mental reality underlying actual behaviour. Linguistic theory should contribute to the study of human mental processes and intellectual capacity. Chomsky called for the grammar of a particular language to be supplemented by a Universal Grammar, that is, principles valid for all (or the majority of) languages. The description of a language should refer to the linguistic competence of a native speaker. Linguistic theory, however, must be concerned primarily with an ideal speaker-hearer in a completely homogeneous community, who knows his language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, shifts of attention, interest, and errors in applying his knowledge of language in actual performance.

 

In his work Syntactic Structures, which proved to be a turning point in 20th-century linguistics, and in subsequent publications, he developed the conception of a Generative Grammar, which departed radically from the structuralism of the previous decades.

A major aim of Generative Grammar was to provide a means of analysing sentences that took account of the underlying level of structure.

N. Chomsky has shifted the focus of linguistic theory from the study of observed behaviour to the investigation of the knowledge that underlies that behaviour. The primary objective of Generative Grammar is to model a speaker's linguistic knowledge.

N. Chomsky characterises linguistic knowledge using the concepts of competence and performance. Competence is a person's implicit knowledge of the rules of a language that makes the production and understanding of an indefinitely large number of new utterances possible, while performance is the actual use of language in real situations. Chomsky proposed that competence, rather than performance, is the primary object of linguistic inquiry. Put simply, knowledge of a language entails mastery of an elaborate system of rules that enables a person to encode and decode a limitless number of utterances in that language.

If knowing a language essentially involves mastering a system of rules, how do humans fulfil this task? Chomsky claims that the linguistic capacity of humans is innate. The general character of linguistic knowledge is determined by the nature of the mind, which is provided with a specialised language faculty. This faculty is determined by the biology of the brain. The human child is born with a blueprint of language, which is called Universal Grammar.

The term generative has two meanings for Chomsky. On the one hand, a Generative Grammar is one that projects any given set of sentences upon the infinite set of sentences that constitute the language being described; this is the property of the grammar that reflects the creative aspect of human knowledge. The second sense of generative implies that the rules of the grammar and the conditions under which they operate must be precisely specified, i.e. formalized, as precisely as the rules of arithmetic.

Thus, using the IC model, N. Chomsky worked out a system of rigid rules for generating (building up) sentences. They are called phrase structure rules. Such rules, and the structures they generate, are called recursive. Here S stands for Sentence, NP for Noun Phrase, VP for Verb Phrase, Det for Determiner, Aux for Auxiliary (verb), N for Noun, and V for Verb stem (Scheme 1):

 

The rules are: every sentence (S) or syntactic construction is built up of two immediate constituents: the noun phrase (NP) and verb phrase (VP). The noun phrase consists of two IC: the determiner (Det) and noun or its equivalent (N). The verb phrase consists of the verb (V) and its noun phrase (NP).

Noun Phrase, Verb Phrase and Prepositional Phrase (PP = Prep + NP, e.g. on the bed) are called “phrasal categories” (nodes in a tree diagram). A phrasal category corresponds to each of the major lexical categories: N(oun), V(erb), Prep(osition). These phrasal categories are constructed so that a particular phrasal category will dominate the corresponding lexical category, perhaps along with other material: so NP will dominate {... N ...}, VP {... V ...}, etc. It will be clear that if lexical items are assigned to lexical categories, and if there is this special relationship between lexical and phrasal categories, we will in this way explicitly define the range of possible phrasal categories: there will be as many phrasal categories as there are major lexical categories.

An immediate consequence of this assumption about the relationship between lexical items and lexical and phrasal categories is that all analyses will necessarily be hierarchical in structure. Lexical categories will be grouped into phrasal categories which will themselves be grouped into yet other phrasal categories and so on.

So, the nodes in the tree are labelled with the names of "categories”, like noun, or noun phrase. None of the labels involves the names of grammatical 'functions', like “subject” or “object”. The decision to exclude functional information of this sort from the analyses does not mean that it is irrelevant, but it does mean that it has been decided that structural information is primary, and functional information is of secondary importance. If we need such information, we will have to find a way of deriving it from the trees. For some functions this is straightforward: for example, we can identify the 'subject' as the NP which is dominated by S, and the “object” as the NP dominated by the VP.

If we follow these rules and choose the girl for the first NP, and the dog for the second, we generate the girl chased the dog; but if the choices are made the other way round, we generate the sentence the dog chased the girl. By the simple device of adding a few more words to the rules, suddenly a vast number of sentences can be generated:

V —> chased, saw, liked...

Det —> the, a

N —> girl, man, horse...

the girl chased the horse

the man saw the girl

the horse saw the man etc.

However, if went were introduced into the rules, as a possible V, ungrammatical sentences would come to be generated, such as *the girl went the man. In working out a generative grammar, therefore, a means has to be found to block the generation of this type of sentence, at the same time permitting such sentences as the man went to be generated.
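A toy generator along these lines can be sketched as follows. The split of the lexicon into transitive and intransitive verb lists is one simple way of blocking *the girl went the man while still generating the man went; this device is an assumption of the sketch, not Chomsky's 1957 formulation:

```python
from itertools import product

# Toy phrase-structure fragment from the text:
# S -> NP VP, NP -> Det N, VP -> V (NP)
DET = ["the", "a"]
N   = ["girl", "man", "horse"]
VT  = ["chased", "saw", "liked"]  # transitive: require an object NP
VI  = ["went"]                    # intransitive: take no object NP

nps = [f"{d} {n}" for d, n in product(DET, N)]  # 2 x 3 = 6 noun phrases

transitive   = [f"{s} {v} {o}" for s, v, o in product(nps, VT, nps)]
intransitive = [f"{s} {v}" for s, v in product(nps, VI)]
sentences = transitive + intransitive

print(len(sentences))                        # 6*3*6 + 6*1 = 114
print("the man went" in sentences)           # True
print("the girl went the man" in sentences)  # False: blocked
```

Even this tiny lexicon already yields over a hundred sentences, which illustrates the generative point of the passage: a few rules plus a few words project onto a large set of grammatical sentences, while the subcategorization of went keeps the ungrammatical string out.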

IC analysis is important in cases where the information expressed by the sentence may be ambiguous.

Let's consider the sentence I shot an elephant in my pajamas. The grammar permits the sentence to be analyzed in two ways, depending on whether the prepositional phrase in my pajamas describes the elephant or the shooting event.
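The two analyses can be written out as labelled bracketings generated from nested structures; this is a sketch, with bracket labels following the conventions introduced earlier in the lecture:

```python
# Render a nested (label, child, child, ...) structure as a labelled
# bracketing of the kind shown earlier in the lecture.
def bracket(node):
    if isinstance(node, str):
        return node
    label, *children = node
    return "[" + label + " " + " ".join(bracket(c) for c in children) + "]"

# Reading 1: the PP modifies the NP (the elephant is in my pajamas).
np_attach = ("S", "I", ("VP", "shot",
             ("NP", "an elephant", ("PP", "in my pajamas"))))
# Reading 2: the PP modifies the VP (the shooting was done in my pajamas).
vp_attach = ("S", "I", ("VP", "shot", ("NP", "an elephant"),
             ("PP", "in my pajamas")))

print(bracket(np_attach))  # [S I [VP shot [NP an elephant [PP in my pajamas]]]]
print(bracket(vp_attach))  # [S I [VP shot [NP an elephant] [PP in my pajamas]]]
```

The two strings share every word but differ in constituency, which is exactly what IC analysis is meant to make visible.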

 

The history of generative syntax since 1957 is the study of the most efficient ways of writing rules, so as to ensure that a grammar will generate all the grammatical sentences of a language and none of the ungrammatical ones.

This tiny fragment of a Generative Grammar from the 1950s suffices only to illustrate the general conception underlying the approach. "Real" grammars of this kind contain many rules of considerable complexity and of different types. One special type of rule that was proposed in the first formulations became known as a transformational rule. These rules enabled the grammar to show the relationship between sentences that had the same meaning but were of different grammatical form. We will speak about transformational rules in detail a bit later on.


Theme 4. Lecture # 4. Methods of FUNCTIONAL Analysis.

1. Syntagmatic functional relations: dependency and coordination

2. The verb and its dependants.

Thus far we have been looking at linguistic units primarily in 'categorial' terms. We have identified categories, like noun or noun phrase, in terms of their internal structure and their distribution within other structures. This is not the only way in which sentence constituents can be viewed, and in this lecture we will consider a different approach. This time we will be concentrating primarily on the functional relationships that can be contracted between constituents. Here we will look at two approaches to description: 1) one describing the various relations established with labels like 'agent' and 'patient' and 2) the other with labels like subject and object.

Functional approaches can be considered to be the inverse of the categorial (structural) approach we have adopted thus far. This time, although we shall of course still be interested in constituent structure, we shall consider functional relationships as primary and constituent structure as interpretive of functional structure.

Let us first consider the general question of grammatical relations. A traditional view distinguishes between "paradigmatic" and "syntagmatic" relations, and we can exemplify the difference between the functional and the categorial approaches by contrasting the way in which they approach this distinction. Paradigmatic relations are essentially relationships of substitutability, the relation between an item in a particular syntactic position and other items that might have been chosen in that syntactic position but were not: e.g. speaks, is speaking, was speaking, will speak, will be speaking.

Syntagmatic relations are essentially relationships of co-occurrence, the relations contracted between an item in a particular position in the sentence and other items that occur in other positions in the same sentence; for example, a verb is transitive or intransitive depending on whether it does or does not need a following NP. The relations are clearly interdependent, and all syntactic categories need to be classified in terms of both kinds of relation.

In this lecture we shall be primarily interested in syntagmatic functional relations (dependency and coordination). In this case, instead of being concerned with the distribution of constituents, we will be interested in the “dependency” of one constituent on another. In almost all constructions one constituent can be considered to be the "head” and the others 'dependants' of the head. The head will "govern” its dependants and mark this government in various ways. Let’s look at some examples of dependency in the English noun phrase.

In noun phrases with the categorial structure [Det (Adj) N (PP)], like the large cat or the cat on the mat, the noun is the "head" of the construction and the adjective, determiner and preposition phrase are its 'modifiers'. The modifiers are dependent on and governed by the head. Semantically the head is the salient (= the most prominent) constituent. With adjectives like large the semantic dependency of the adjective on the noun is particularly marked, since the interpretation of 'scalar' adjectives crucially depends on the noun modified: 'large' for a cat is larger than 'large' for a mouse, but smaller than 'large' for an elephant.

Syntactically the dependency is shown in a number of ways. To begin with, the head is obligatory, whereas modifiers are normally optional. So we can find NPs with only a head noun, as in Cats sit on mats, but we will not find NPs consisting only of modifiers, as in *The sits on the mat, or *The large sits on the mat (except in the latter case in special circumstances, where a head noun is 'understood').

Dependencies of this kind are often also overtly marked, and when they are, it is the head that governs the marking. Three kinds of dependency marking are typically found:

1) morphological marking; 2) the use of special particles or other words like prepositions and 3) marking by word order.

1) Morphological marking in the NP usually takes the form of 'concord' or 'agreement': a particular grammatical category of the head is copied onto the dependants. English has few concordial constructions, but where they do exist it is clearly the head that determines the concord. So, those determiners that vary for number (that : those, this : these) take their number from the head noun: those cats, not *that cats.
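Concord of this kind can be sketched as a simple table-driven check; the determiner list and the return convention are assumptions of this illustration:

```python
# Number concord in the English NP: determiners that vary for number
# must copy the number of the head noun; invariant ones ('the', 'a')
# are compatible with either.
DET_NUMBER = {"this": "sg", "that": "sg", "these": "pl", "those": "pl"}

def concord_ok(det, noun_number):
    """True if the determiner agrees with (or is neutral to) the noun's number."""
    return DET_NUMBER.get(det) in (None, noun_number)

print(concord_ok("those", "pl"))  # True:  those cats
print(concord_ok("that", "pl"))   # False: *that cats
print(concord_ok("the", "pl"))    # True:  the cats
```

Note that the direction of the check mirrors the text's point about government: the noun's number is given, and the determiner is tested against it, not the other way round.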

2) English has few special particles reserved as markers of dependency apart from (besides) the 'apostrophe s' used to mark the dependent 'genitive' in structures like John's book or the man next door's car. The language does, however, make extensive use of prepositions for this purpose, as in the other 'genitive' structure in English, pint of milk or President of the United States, and in 'postnominal' PP modifiers like man in the moon and holidays in Greece.

3) The third important marker of dependency is word order, and here again the head is the determining factor. The order of the modifiers is relative to the head. In English determiners and adjectives generally precede the head and PP modifiers invariably follow it. In other languages the order may be different.

Characteristically, nouns function as heads and adjectives as modifiers, e.g. a new station – it is the modifier-head pattern (the modifier coming first and the head following it).

But this is not invariably the case, and it is interesting to see what happens in atypical constructions. When we meet an NP consisting of two or more nouns the pressure of the modifier-head pattern invites us to interpret the last as head and the others as modifiers: so a bus station is a type of station and a station bus is a type of bus. Similarly if we encounter an NP with two adjectives and no noun, as in the undeserving poor, it too will be interpreted as a modifier-head construction, poor being construed (=interpreted) as the head and undeserving as its modifier. Indeed, in order to maintain the form-function correlation between head and noun, and modifier and adjective, linguists frequently say about an NP like the filthy rich, either that the adjective rich has been 'recategorized' as a noun, or that there is an 'understood' noun. But this confuses category and function, and it is interesting to observe that it is generally the dependency structure and not the lexical class that determines the interpretation and analysis.

Coordination (“and”) is an important criterion for class membership. It is equally important when we are discussing dependencies. We know that conjuncts are normally of the same category: nouns coordinate (= conjoin) with nouns (one can say men and women), verbs with verbs, NPs with NPs, VPs with VPs and so on. 'Cross-category' coordination, noun with verb, NP with VP and so on, is usually unacceptable (*men and up). Dependency structures show similar restrictions, and cross-dependency coordination (e.g. a subject or object coordinated with an adverbial modifier, as in John bought a car and in the morning) is usually as infelicitous (= unfortunate) as cross-category coordination. So, we can coordinate subject expressions (John and his brother bought a house) or object expressions (John bought a car and a house), but not *John bought a car and in the morning.

Cross-category coordination is permissible when unlike categories have the same dependency. So, for example, it is possible to coordinate adjectives and PPs providing they are both 'complements': The baby is in bed and asleep (PP and Adj), I am going home and to bed (N and PP). In text, cross-dependency coordination is sometimes exploited for a particular rhetorical effect known as 'zeugma', as in Mr Pickwick took his hat and his leave.
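The two licensing conditions (same category, or unlike categories sharing the same dependency) can be sketched as one predicate; the dictionary encoding of words is an assumption of this illustration:

```python
# Sketch of the coordination constraints: conjuncts normally share a
# category, but unlike categories may coordinate when they bear the
# same dependency (e.g. both are complements).
def can_coordinate(a, b):
    same_category = a["cat"] == b["cat"]
    same_function = (a.get("func") is not None
                     and a.get("func") == b.get("func"))
    return same_category or same_function

noun1    = {"word": "men",    "cat": "N"}
noun2    = {"word": "women",  "cat": "N"}
particle = {"word": "up",     "cat": "Particle"}
pp       = {"word": "in bed", "cat": "PP",  "func": "complement"}
adj      = {"word": "asleep", "cat": "Adj", "func": "complement"}

print(can_coordinate(noun1, noun2))     # True:  men and women
print(can_coordinate(noun1, particle))  # False: *men and up
print(can_coordinate(pp, adj))          # True:  in bed and asleep
```

Zeugma, on this view, is precisely a deliberate violation of the predicate for rhetorical effect.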

           


The verb and its dependants

Now we will be concerned with what has traditionally been regarded as one of the most important sets of functional relationships to be found in any language: those between a verb and the various NP and PP constituents with which it co-occurs. We shall be interested both in the general nature of the relationships and also in the ways in which they are syntactically marked.

The problem is that the dependencies operate at various linguistic levels, and at each level the relations are somewhat different. We will consider three levels: 1) a 'surface' level of word forms in a sentence, 2) a 'deep' level of semantic or logical relations, again within a sentence, and 3) a 'textual' level, which involves relationships both within and between sentences. There are close relationships between these various levels, and some of the terminology involved is sometimes applied to all three levels.

Let’s distinguish between the different levels.

Consider the sentence The farmer is killing the ducklings.

A 'surface' description is a categorial description of surface structure (NP + VP and so on). At this level of structure we have also to identify dependency relations, since various facts of word order, morphological marking and so on hang on these dependencies: we will identify the farmer as the 'grammatical subject' and the ducklings as the 'grammatical object'. The word 'grammatical' is used to indicate that in this case we are dealing with surface grammatical structure. The grammatical subject is an obligatory constituent; it precedes the verb, concords with it in number and is the governor of the concord (in the example we have the farmer is ... and not *the farmer are ...).

The grammatical object is also an obligatory constituent when we are dealing with a transitive verb like KILL in an active sentence. It will immediately follow the verb. Other dependencies are sometimes marked by word order and sometimes by other markers, particularly prepositions. The indirect object (Mary in John gave the book to Mary / gave Mary the book) can follow the object, in which case it is marked with the preposition to, or it can precede the object, in which case it has no prepositional marker. Instrumental expressions are marked with the preposition with (The farmer killed the ducklings with an axe). Benefactive expressions (showing for whose benefit the action is performed) are marked with the preposition for (The farmer killed the ducklings for his wife, and so on). At this level of description we can regard the subject as the head of the sentence as a whole, with the VP as its modifier; and within the VP we can regard the verb as the head and its objects etc. as modifiers.

Next, let's consider our example from the point of view of a 'deep' description. There are different views on what a representation at this level should look like: 1) the transformational model assumes that the representation should be in categorial terms (NP, VP, etc.); 2) a functional model usually assumes that at this level the description should be in relational rather than categorial terms. At this level, the verb itself is usually regarded as the head of the construction and all the various NPs etc., including the one that becomes the grammatical subject, as its dependants. These dependants are often referred to as the 'arguments' of the verb. In these terms, instead of talking about 'transitive' and 'intransitive' verbs (this is a surface categorial description), we describe verbs in terms of the number of arguments they take. Thus, DIE will be a 'one-place' verb since it occurs with only a single argument (The ducklings died), KILL will be a 'two-place' verb since it needs two arguments (The farmer killed the ducklings), GIVE will be a 'three-place' verb (The farmer (agent) gave some grain (patient) to the ducklings (goal)) and so on.

If a verb has more than one argument, as in the case of KILL, we distinguish between them in a number of ways since there is a clear difference of meaning between The farmer killed the ducklings and The ducklings killed the farmer.

1) One way is to characterize each argument with a label like 'agent' for the performer of the action, 'patient' for the sufferer of the action, 'goal' for the beneficiary of the action and so forth. These descriptions can then be associated with the various arguments, perhaps along the following lines:

DIE (patient)

KILL (agent, patient)

GIVE (agent, patient, goal)

 

2) Another way of distinguishing between the various arguments is to use labels like ' logical subject', 'logical object' and so on, the modifier 'logical' indicating that these relations hold at a 'deep' semantic level of description rather than at the 'surface' level of grammatical description. In these terms, since all verbs have a logical subject, the single argument associated with a one-place verb must be the logical subject. With two- and more place verbs there are various ways in which we might distinguish the arguments. The simplest is to label them as logical subject, object and so on: FARMER [LS], KILL, DUCKLINGS [LO]. In representations at this level of description, the order in which the various NPs are shown identifies their relation. So, for example, we can identify the first argument as the logical subject, the second as the logical object and so on. Thus in KILL (FARMER, DUCKLINGS), corresponding to The farmer killed the ducklings, FARMER is logical subject and DUCKLINGS logical object, and in KILL (DUCKLINGS, FARMER), corresponding to The ducklings killed the farmer DUCKLINGS is logical subject and FARMER is logical object.
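The positional convention can be sketched directly: the order of the arguments in a predicate-argument representation encodes their logical relations. The label list and function name are assumptions of this illustration:

```python
# In KILL (FARMER, DUCKLINGS), position encodes relation:
# first argument = logical subject, second = logical object.
def logical_relations(pred, *args):
    """Map a predicate's arguments to logical relations by position."""
    labels = ["logical subject", "logical object", "logical indirect object"]
    return {label: arg for label, arg in zip(labels, args)}

active  = logical_relations("KILL", "FARMER", "DUCKLINGS")
reverse = logical_relations("KILL", "DUCKLINGS", "FARMER")

print(active["logical subject"])   # FARMER
print(reverse["logical subject"])  # DUCKLINGS
```

Swapping the argument order swaps the logical relations, which captures the difference in meaning between The farmer killed the ducklings and The ducklings killed the farmer without any appeal to surface word order.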

At the level of textual structure people sometimes talk about the 'psychological subject' or 'topic' (= theme) of a sentence, that is, 'what someone is talking or writing about'. The notions of 'topic' or 'psychological subject' involve the concept of a 'text', or 'discourse'. The idea at issue is the following. When we meet a sentence in isolation we typically construe the first NP as the 'psychological subject'. Thus, we interpret an active sentence like The farmer is killing the ducklings as a comment on the activities of the farmer, and the corresponding passive The ducklings are being killed by the farmer as a comment on the fate of the ducklings. In cases of this sort, grammatical and psychological subjects coincide because the grammatical subject is the first NP constituent in the sentence. It is, however, possible to construct examples where this is not the case, as in The ducklings, the farmer killed, where the grammatical object (the ducklings) is the psychological subject. In this example, order and particular intonational features are markers of the psychological subject.

 


