# Grammatical theory

From transformational grammar to constraint-based approaches

*Fifth revised edition*

Stefan Müller

Textbooks in Language Sciences 1

### Textbooks in Language Sciences

Editors: Stefan Müller, Antonio Machicao y Priemer

Editorial Board: Claude Hagège, Marianne Mithun, Anatol Stefanowitsch, Foong Ha Yap



ISSN: 2364-6209

# Grammatical theory

From transformational grammar to constraint-based approaches

*Fifth revised edition*

Stefan Müller

Stefan Müller. 2023. *Grammatical theory: From transformational grammar to constraint-based approaches. Fifth revised and extended edition*. (Textbooks in Language Sciences 1). Berlin: Language Science Press.

This title can be downloaded at: http://langsci-press.org/catalog/book/380

© 2023, Stefan Müller

Published under the Creative Commons Attribution 4.0 Licence (CC BY 4.0): http://creativecommons.org/licenses/by/4.0/

ISBN: 978-3-96110-402-4 (Digital), 978-3-98554-060-0 (Softcover)

ISSN: 2364-6209

DOI: 10.5281/zenodo.7628029

Source code available from www.github.com/langsci/25

Errata: paperhive.org/documents/remote?type=langsci&id=25

Cover and concept of design: Ulrike Harbort

Translation: Andrew Murphy, Stefan Müller

Typesetting: Stefan Müller

Proofreading: Viola Auermann, Armin Buch, Andreea Calude, Rong Chen, Matthew Czuba, Leonel de Alencar, Christian Döhler, Joseph T. Farquharson, Andreas Hölzl, Gianina Iordăchioaia, Paul Kay, Anne Kilgus, Sandra Kübler, Timm Lichte, Antonio Machicao y Priemer, Michelle Natolo, Stephanie Natolo, Sebastian Nordhoff, Elizabeth Pankratz, Parviz Parsafar, Conor Pyle, Daniela Schröder, Eva Schultze-Berndt, Alec Shaw, Benedikt Singpiel, Anelia Stefanova, Neal Whitman, Viola Wiegand

Open reviewing: Armin Buch, Leonel de Alencar, Andreas Hölzl, Dick Hudson, Gianina Iordăchioaia, Paul Kay, Timm Lichte, Antonio Machicao y Priemer, Andrew McIntyre, Arne Nymos, Sebastian Nordhoff, Neal Whitman

Fonts: Libertinus, Arimo, DejaVu Sans Mono

Typesetting software: XeLaTeX

Language Science Press, xHain, Grünberger Str. 16, 10243 Berlin, Germany, http://langsci-press.org

Storage and cataloguing done by FU Berlin

For Max










# **Preface**

This book is an extended and revised version of my German book *Grammatiktheorie* (Müller 2013a). It introduces various grammatical theories that play a role in current theorizing or have made contributions in the past which are still relevant today. I explain some foundational assumptions and then apply the respective theories to what can be called the "core grammar" of German. I have decided to stick to the object language that I used in the German version of this book since many of the phenomena that will be dealt with cannot be explained with English as the object language. Furthermore, many theories have been developed by researchers with English as their native language and it is illuminating to see these theories applied to another language. I show how the theories under consideration deal with arguments and adjuncts, active/passive alternations, local reorderings (so-called scrambling), verb position, and fronting of phrases over larger distances (the verb second property of the Germanic languages other than English).

The second part deals with foundational questions that are important for developing theories. This includes a discussion of the question of whether we have innate domain-specific knowledge of language (UG), of psycholinguistic evidence concerning the processing of language by humans, of the status of empty elements, and of the question of whether we construct and perceive utterances holistically or rather compositionally, that is, whether we use phrasal or lexical constructions. The second part is not intended as a standalone book, although the printed version of the book is distributed this way for technical reasons (see below). Rather, it contains topics that are discussed again and again when frameworks are compared. So instead of attaching these discussions to the individual chapters, they are organized in a separate part of the book.

Unfortunately, linguistics is a scientific field with a considerable amount of terminological chaos. I therefore wrote an introductory chapter that introduces terminology in the way it is used later on in the book. The second chapter introduces phrase structure grammars, which play a role in many of the theories that are covered in this book. I use these two chapters (excluding Section 2.3 on interleaving phrase structure grammars and semantics) in introductory courses of our BA curriculum for German studies. Advanced readers may skip these introductory chapters. The following chapters are structured in a way that should make it possible to understand the introduction of the theories without any prior knowledge. The sections regarding new developments and classification are more ambitious: they refer to chapters still to come and also point to other publications that are relevant in the current theoretical discussion but cannot be repeated or summarized in this book. These parts of the book address advanced students and researchers. I use this book for teaching the syntactic aspects of the theories in a seminar for advanced students in our BA. The slides are available on my web page. The second part of the book, the general discussion, is more ambitious and contains the discussion of advanced topics and current research literature.

This book only deals with relatively recent developments. For a historical overview, see for instance Robins (1997) and Jungen & Lohnstein (2006). I am aware of the fact that chapters on Integrational Linguistics (Lieb 1983; Eisenberg 2004; Nolda 2007), Optimality Theory (Prince & Smolensky 1993; Grimshaw 1997; G. Müller 2000), Role and Reference Grammar (Van Valin 1993) and Relational Grammar (Perlmutter 1983; Perlmutter & Rosen 1984) are missing. I will leave these theories for later editions.

The original German book was planned to have 400 pages, but it ended up much bigger: the first German edition has 525 pages and the second German edition has 564 pages. I added a chapter on Dependency Grammar and one on Minimalism to the English version and now the book has 861 pages. I tried to represent the chosen theories appropriately and to cite all important work. Although the list of references is over 85 pages long, I was probably not successful. I apologize for this and any other shortcomings.

# **Acknowledgments**

I would like to thank David Adger, Jason Baldridge, Felix Bildhauer, Emily M. Bender, Stefan Evert, Gisbert Fanselow, Sandiway Fong, Hans-Martin Gärtner, Kim Gerdes, Adele Goldberg, Bob Levine, Paul Kay, Jakob Maché, Guido Mensching, Laura Michaelis, Geoffrey Pullum, Uli Sauerland, Roland Schäfer, Jan Strunk, Remi van Trijp, Shravan Vasishth, Tom Wasow, and Stephen Wechsler for discussion and Monika Budde, Philippa Cook, Laura Kallmeyer, Tibor Kiss, Gisela Klann-Delius, Jonas Kuhn, Timm Lichte, Anke Lüdeling, Jens Michaelis, Bjarne Ørsnes, Andreas Pankau, Christian Pietsch, Frank Richter, Ivan Sag, and Eva Wittenberg for comments on earlier versions of the German edition of this book and Thomas Groß, Dick Hudson, Sylvain Kahane, Paul Kay, Haitao Liu (刘海涛), Andrew McIntyre, Sebastian Nordhoff, Tim Osborne, Andreas Pankau, and Christoph Schwarze for comments on earlier versions of this book. Thanks to Leonardo Boiko and Sven Verdoolaege for pointing out typos. Special thanks go to Martin Haspelmath for very detailed comments on an earlier version of the English book.

This book was the first Language Science Press book that had an open review phase (see below). I thank Dick Hudson, Paul Kay, Antonio Machicao y Priemer, Andrew McIntyre, Sebastian Nordhoff, and one anonymous open reviewer for their comments. These comments are documented at the download page of this book. In addition, the book went through a stage of community proofreading (see also below). Some of the proofreaders did much more than proofreading; their comments are highly appreciated and I decided to publish these comments as additional open reviews. Armin Buch, Leonel de Alencar, Andreas Hölzl, Gianina Iordăchioaia, Timm Lichte, Antonio Machicao y Priemer, and Neal Whitman deserve special mention here.

I thank Wolfgang Sternefeld and Frank Richter, who wrote a detailed review of the German version of this book (Sternefeld & Richter 2012). They pointed out some mistakes and omissions that were corrected in the second edition of the German book and which are of course not present in the English version.

Thanks to all the students who commented on the book and whose questions led to improvements. Lisa Deringer, Aleksandra Gabryszak, Simon Lohmiller, Theresa Kallenbach, Steffen Neuschulz, Reka Meszaros-Segner, Lena Terhart and Elodie Winckel deserve special mention.

Since this book is built upon all my experience in the area of grammatical theory, I want to thank all those with whom I ever discussed linguistics during and after talks at conferences, workshops, summer schools or via email. Werner Abraham, John Bateman, Dorothee Beermann, Rens Bod, Miriam Butt, Manfred Bierwisch, Ann Copestake, Holger Diessel, Kerstin Fischer, Dan Flickinger, Peter Gallmann, Petter Haugereid, Lars Hellan, Tibor Kiss, Wolfgang Klein, Hans-Ulrich Krieger, Andrew McIntyre, Detmar Meurers, Gereon Müller, Martin Neef, Manfred Sailer, Anatol Stefanowitsch, Peter Svenonius, Michael Tomasello, Hans Uszkoreit, Gert Webelhuth, Daniel Wiechmann and Arne Zeschel deserve special mention.

I thank Sebastian Nordhoff for a comment regarding the completion of the subject index entry for *recursion*.

Andrew Murphy translated part of Chapter 1 and the Chapters 2–3, 5–10, and 12–23. Many thanks for this!

I also want to thank the 27 community proofreaders (Viola Auermann, Armin Buch, Andreea Calude, Rong Chen, Matthew Czuba, Leonel de Alencar, Christian Döhler, Joseph T. Farquharson, Andreas Hölzl, Gianina Iordăchioaia, Paul Kay, Anne Kilgus, Sandra Kübler, Timm Lichte, Antonio Machicao y Priemer, Michelle Natolo, Stephanie Natolo, Sebastian Nordhoff, Elizabeth Pankratz, Parviz Parsafar, Conor Pyle, Daniela Schröder, Eva Schultze-Berndt, Alec Shaw, Benedikt Singpiel, Anelia Stefanova, Neal Whitman, Viola Wiegand), who each worked on one or more chapters and really improved this book. I got more comments from every one of them than I ever got for a book done with a commercial publisher. Some comments were on content rather than on typos and layout issues. No proofreader employed by a commercial publisher would have spotted these mistakes and inconsistencies, since commercial publishers do not have staff who know all the grammatical theories that are covered in this book.

During the past years, a number of workshops on theory comparison have taken place. I was invited to three of them. I thank Helge Dyvik and Torbjørn Nordgård for inviting me to the fall school for Norwegian PhD students *Languages and Theories in Contrast*, which took place in 2005 in Bergen. Guido Mensching and Elisabeth Stark invited me to the workshop *Comparing Languages and Comparing Theories: Generative Grammar and Construction Grammar*, which took place in 2007 at the Freie Universität Berlin, and Andreas Pankau invited me to the workshop *Comparing Frameworks* in 2009 in Utrecht. I really enjoyed the discussion with all participants of these events and this book benefited enormously from the interchange.

I thank Peter Gallmann for the discussion of his lecture notes on GB during my time in Jena. Sections 3.1.3–3.4 are structured similarly to his script and take over a lot from it. Thanks to David Reitter for the LaTeX macros for Combinatory Categorial Grammar, to Mary Dalrymple and Jonas Kuhn for the LFG macros and example structures, and to Laura Kallmeyer for the LaTeX sources of most of the TAG analyses. Most of the trees have been adapted to the forest package because of compatibility issues with XeLaTeX, but the original trees and texts were a great source of inspiration, and without them the figures in the respective chapters would not be half as pretty as they are now.

I thank Sašo Živanović for implementing the LaTeX package forest. It really simplifies the typesetting of trees, dependency graphs, and type hierarchies. I also thank him for individual help via email and on stackexchange. In general, those active on stackexchange cannot be thanked enough: most of my questions regarding specific details of the typesetting of this book or the implementation of the LaTeX classes that are now used by Language Science Press were answered within minutes. Thank you! Since this book is a true open access book under the CC-BY license, it can also be an open source book. The interested reader will find a copy of the source code at https://github.com/langsci/25. By making the book open source, I pass on the knowledge provided by the LaTeX gurus and hope that others benefit from this and learn to typeset their linguistics papers in nicer and/or more efficient ways.

Viola Auermann and Antje Bahlke, Sarah Dietzfelbinger, Lea Helmers, and Chiara Jancke cannot be thanked enough for their work at the copy machines. Viola also helped a lot with proofreading prefinal stages of the translation. I also want to thank my (former) lab members Felix Bildhauer, Philippa Cook, Janna Lipenkova, Jakob Maché, Bjarne Ørsnes and Roland Schäfer, who were already mentioned above for other reasons, for their help with teaching. During the years from 2007 until the publication of the first German edition of this book, two of the three tenured positions in German Linguistics were unfilled, and without their help I would not have been able to meet the teaching requirements and would never have finished the *Grammatiktheorie* book.

I thank Tibor Kiss for advice in questions of style. His diplomatic manner was always a shining example for me and I hope that this is also reflected in this book.

# **On the way this book is published**

I started to work on my dissertation in 1994 and defended it in 1997. During the whole time, the manuscript was available on my web page. After the defense, I had to look for a publisher. I was quite happy to be accepted to the series *Linguistische Arbeiten* by Niemeyer, but at the same time I was shocked about the price, which was 186.00 DM for a paperback book that was written and typeset by me without any help from the publisher (twenty times the price of a paperback novel).<sup>1</sup> This basically meant that my book was depublished: until 1998 it was available from my web page and after this it was available in libraries only. My Habilitationsschrift was published by CSLI Publications for a much more reasonable price. When I started writing textbooks, I was looking for alternative distribution channels and started to negotiate with no-name print-on-demand publishers. Brigitte Narr, who runs the Stauffenburg publishing house, convinced me to publish my HPSG textbook with her. The copyrights for the German version of the book remained with me so that I could publish it on my web page. The collaboration was successful, so I also published my second textbook about grammatical theory with Stauffenburg. I think that this book has a broader relevance and should be accessible for non-German-speaking readers as well.

<sup>1</sup>As a side remark: in the meantime, Niemeyer was bought by de Gruyter and closed down. The price of the book is now 139.95 € / \$ 196.00. The price in euros corresponds to 273.72 DM. Update 23.06.2020: the book is sold for 149.95 € / \$ 169.82 now.

I therefore decided to have it translated into English. Since Stauffenburg is focused on books in German, I had to look for another publisher. Fortunately, the situation in the publishing sector has changed quite dramatically since 1997: we now have high-profile publishers with strict peer review that are entirely open access. I am very glad about the fact that Brigitte Narr sold the rights of my book back to me and that I can now publish the English version with Language Science Press under a CC-BY license.

# **Language Science Press: scholar-owned high quality linguistic books**

In 2012, a group of people found the situation in the publishing business so unbearable that they agreed that it would be worthwhile to start a bigger initiative for publishing linguistics books in platinum open access, that is, free for both readers and authors. I set up a web page and collected supporters, very prominent linguists from all over the world and all subdisciplines, and Martin Haspelmath and I then founded Language Science Press. At about the same time, the DFG had announced a program for open access monographs; we applied (Müller & Haspelmath 2013) and got funded (two out of 18 applications got funding). The money was used for a coordinator (Dr. Sebastian Nordhoff), an economist (Debora Siller), and two programmers (Carola Fanselow and Dr. Mathias Schenner), who worked on the publishing platform Open Monograph Press (OMP) and on conversion software that produces various formats (ePub, XML, HTML) from our LaTeX code. Svantje Lilienthal worked on the documentation of OMP, produced screencasts and did user support for authors, readers and series editors.

OMP was extended by open review facilities and community-building gamification tools (Müller 2012a, Müller & Haspelmath 2013). All Language Science Press books are reviewed by at least two external reviewers. Reviewers and authors may agree to publish these reviews and thereby make the whole process more transparent (see also Pullum (1984) for the suggestion of open reviewing of journal articles). In addition, there is an optional second review phase: the open review (see the blog posts by Sebastian Nordhoff about the reviewing options at Language Science Press<sup>2</sup>). This second optional reviewing phase is completely open to everybody. The whole community may comment on the document that is published by Language Science Press. After this second review phase, which usually lasts for two months, authors may revise their publication and an improved version will be published. The English version of this book was the first book to go through this open review phase. The Chinese translation was also open for comments on Paperhive. Readers left more than 2500 comments,<sup>3</sup> which were automatically fed into the version control and bug tracking system used by Language Science Press.<sup>4</sup>

Currently, Language Science Press has 26 series on various subfields of linguistics with high-profile series editors from all continents. There are 437 members in the respective editorial boards, coming from 49 countries. We have 134 published books with more than 1 million downloads.<sup>5</sup> 1196 authors from 53 countries have published books or chapters with Language Science Press as of March 2020 and there are 572 expressions of interest.

<sup>2</sup> https://userblogs.fu-berlin.de/langsci-press/2015/05/27/axes-of-open-review/, 2020-09-03.

<sup>3</sup> https://paperhive.org/documents/items/Zf2Qf47i6nf2, 2020-09-03.

<sup>4</sup> https://github.com/langsci/177/, 2020-09-03.


Series editors are responsible for delivering manuscripts that are typeset in LaTeX, but they are supported by a web-based typesetting infrastructure that was set up by Language Science Press, and there is also conversion software converting Word manuscripts into LaTeX. Proofreading is community-based. So far, 224 people have helped improve our books. Their work is documented in the Hall of Fame: http://langsci-press.org/hallOfFame.

Language Science Press is a community-based publisher, but apart from the press managers Martin Haspelmath and me, there are two people who are employed for the central organization and typesetting: Sebastian Nordhoff, who is also a press manager, and Felix Kopecky, who does typesetting. Both have 50% positions. In the period of 2018–2020, these two positions were paid with the help of financial support from 115 academic institutions, including Harvard, MIT, and Berkeley, and from societies like EuroSLA.<sup>6</sup> The Language Science Press approach is endorsed by the leading scholars Noam Chomsky, Adele Goldberg, and Steven Pinker, who sent letters of support in 2017.<sup>7</sup> The fundraising for the period 2021–2023 is ongoing.

If you think that textbooks like this one should be freely available to whoever wants to read them and that publishing scientific results should not be left to profit-oriented publishers, then you can join the Language Science Press community and support us in various ways: you can register with Language Science Press and have your name listed on our supporter page with more than 1000 other enthusiasts, or you may devote your time and help with proofreading. We are also looking for institutional supporters like foundations, societies, linguistics departments or university libraries. Detailed information on how to support us is provided at the following webpage: http://langsci-press.org/supportUs. In case of questions, please contact me or the Language Science Press coordinator at contact@langsci-press.org.

Berlin, September 04, 2020 Stefan Müller

<sup>5</sup>Downloads by robots excluded; the English version of this textbook has been downloaded over 40,000 times since 2016.

<sup>6</sup>A full list of supporting institutions is available at: http://langsci-press.org/knowledgeunlatched.

<sup>7</sup> "Very pleased to learn about this fine initiative, a most valuable way to bring to the general public the results of scholarly work. It's a cliché, but true, that we all stand on the shoulders of giants, and rely on the cultural wealth provided to everyone by past generations. It is only proper that the public should gain access to whatever contemporary scholarship can contribute, and the ideas outlined here seem to be a very promising way to realize this ideal." Noam Chomsky, 2017-02-01.

<sup>&</sup>quot;Language Science Press is setting a standard for freely accessible articles and books that are carefully reviewed." Adele Goldberg, 2017-05-02.

<sup>&</sup>quot;Sharing data and methods is one of the pillars of scholarly inquiry. The knowledge created by scholars belongs to everyone, and open access publications are a major pathway to realizing that ideal. Language Science Press, together with Knowledge Unlatched, provides an excellent way for us to make our findings available to the global public." Steven Pinker, 2017-01-22.

# **Foreword of the second edition**

The first edition of this book was published almost exactly two years ago. The book has approximately 15,000 downloads and is used for teaching and in research all over the world. This is what every author and every teacher dreams of: distribution of knowledge and accessibility for everybody. The foreword of the first edition ends with a description of Language Science Press in 2016. This is the situation now:<sup>8</sup> We have 324 expressions of interest and 58 published books. Books are published in 20 book series with 263 members of editorial boards from 44 different countries from six continents. We have a total of 175,000 downloads. 138 linguists from all over the world have participated in proofreading. There are currently 296 proofreaders registered with Language Science Press. Language Science Press is a community-based publisher, but there is one person who manages everything: Sebastian Nordhoff. His position has to be paid. We were successful in acquiring financial support from almost 100 academic institutions, including Harvard, MIT, and Berkeley.<sup>9</sup> If you want to support us by just signing the list of supporters, by publishing with us, by helping as a proofreader or by convincing your librarian/institution to support Language Science Press financially, please refer to http://langsci-press.org/supportUs.

After these more general remarks concerning Language Science Press, I describe the changes I made for the second edition and thank those who pointed out mistakes and provided feedback.

I want to thank Wang Lulu for pointing out some typos that she found while translating the book into Chinese. Thanks for both the typos and the translation.

Fritz Hamm noticed that the definition of Intervention (see p. 138) was incomplete and pointed out some inconsistencies in translations of predicates in Section 2.3. I turned some straight lines in Chapter 3 into triangles and added a discussion of different ways to represent movement (see Figure 3.8 on p. 99). I now explain what SpecIP stands for and I added footnote 9 on SpecIP as label in trees. I extended the discussion of Pirahã in Section 13.1.8.2 and added lexical items that show that Pirahã-like modification without recursion can be captured in a straightforward way in Categorial Grammar.

I reorganized the HPSG chapter to be in line with more recent approaches assuming the valence features spr and comps (Sag 1997, Müller 2023b) rather than a single valence feature. I removed the section on the local feature in Sign-Based Construction Grammar (Section 10.6.2.2 in the first edition) since it was built on the wrong assumption that the filler would be identical to the representation in the valence specification. In Sag (2012: 536), only the information in syn and sem is shared.

I added example (60) on page 632, which shows a difference in the choice of preposition in a prepositional object in Dutch vs. German. Since the publication of the first English edition of the Grammatical Theory textbook, I have worked extensively on the phrasal approach to benefactive constructions in LFG (Asudeh, Giorgolo & Toivonen 2014). Section 21.2.2 was revised and adapted to what will be published as Müller (2018a). There is now a brief chapter on complex predicates in TAG and Categorial Grammar/HPSG (Chapter 22), which shows that valence-based approaches allow for an underspecification of structure. Valence is potential structure, while theories like TAG operate with actual structure.

<sup>8</sup> See http://userblogs.fu-berlin.de/langsci-press/2018/01/18/achievements-2017/ for the details and graphics.

<sup>9</sup> A full list of supporting institutions is available here: http://langsci-press.org/knowledgeunlatched.

Apart from this, I fixed several minor typos and added and updated some references and URLs. Thanks to Philippa Cook, Timm Lichte, and Antonio Machicao y Priemer for pointing out typos. Thanks to Leonel Figueiredo de Alencar, Francis Bond, John Carroll, Alexander Koller, Emily M. Bender, and Glenn C. Slayden for pointers to literature. Sašo Živanović helped adapt version 2.0 of the forest package so that it could be used with this large book. I am very grateful for this nice tree typesetting package and all the work that went into it.

The source code of the book and the version history is available on GitHub. Issues can be reported there: https://github.com/langsci/25. The book is also available on Paperhive, a platform for collective reading and annotation: https://paperhive.org/documents/remote?type=langsci&id=380. It would be great if you would leave comments there.

Berlin, 21st March 2018 Stefan Müller

# **Foreword of the third edition**

Since more and more researchers and students are using the book now, I get feedback that helps improve it. For the third edition I added references, expanded the discussion of the passive in GB (Section 3.4) a bit and fixed typos.<sup>10</sup>

Chapter 4 contained figures from different chapters of Adger (2003). Adger introduces the DP rather late in the book and I had a mix of NPs and DPs in the figures. I fixed this in the new edition. I am so used to talking about NPs that there were references to NP in the general discussion that should have been references to DP. I fixed this as well. I added a figure depicting the architecture assumed in Minimalist theories with Phases (the right figure in Figure 4.1), and since the figures mention the concept of *numeration*, I added a footnote on numerations.

I thank Frank Van Eynde for pointing out eight typos in his review of the first edition. They have been fixed. He also pointed out that the placement of arg-st in the feature geometry of signs in HPSG did not correspond to Ginzburg & Sag (2000), where arg-st is on the top level rather than under cat. Note that earlier versions of this book had arg-st under cat and there had never been proper arguments for why it should not be there, which is why many practitioners of HPSG have kept it in that position (Müller 2018a). One reason to keep arg-st on the top level is that arg-st is appropriate for lexemes only. If arg-st is on the sign level, this can be represented in the type hierarchy: lexemes and words have an arg-st feature, phrases do not.

<sup>10</sup>A detailed list of issues and fixes can be found in the GitHub repository of this book at https://github.com/ langsci/25/.

If arg-st is on the cat level, one would have to distinguish between cat values that belong to lexemes and words on the one hand and phrasal cat values on the other hand, which would require two additional subtypes of the type *cat*. The most recent version of the computer implementation done at Stanford by Dan Flickinger has arg-st under local (2019-01-24). So, I was tempted to leave everything as it was in the second edition of the book. However, there is a real argument for not having arg-st under cat: cat is assumed to be shared in coordinations, and cat contains valence features for subjects and complements. The values of these valence features are determined by a mapping from arg-st. In some analyses, extracted elements are not mapped to the valence features, and the same is sometimes assumed for omitted elements. To take an example, consider (1):

(1) He saw and helped the hikers.

*saw* and *helped* are coordinated and the members in the valence lists have to be compatible. Now, if one coordinates a ditransitive verb that has one omitted argument with a strictly transitive verb, this works under the assumption that the omitted argument is not part of the valence representation. But if arg-st were part of cat, coordination would be made impossible, since a three-place argument structure list would be incompatible with a two-place list. Hence I decided to change this in the third edition and represent arg-st outside of cat from now on.<sup>11</sup>

I changed the section about Sign-Based Construction Grammar (SBCG) again. An argument about nonlocal dependencies and locality was not correct, since Sag (2012: 166) does not share all information between the filler and the extraction site. The argument is now revised and presented as Section 10.6.2.3. Reviewing Müller (2021b), Bob Borsley pointed out to me that the xarg feature is a way to circumvent locality restrictions that is actually used in SBCG. I added a footnote to the section on locality in SBCG.

A brief discussion of Welke's (2019) analysis of the German clause structure was added to the chapter about Construction Grammar (see Section 10.3).

The analysis of a verb-second sentence in LFG is now part of the LFG chapter (Figure 7.5 on page 244) and not just an exercise in the appendix. A new exercise was designed instead of the old one and the old one was integrated into the main text.

I added a brief discussion of Osborne's (2018a) claim that Dependency Grammars are simpler than phrase structure grammars (p. 413).

Geoffrey Pullum pointed out at the HPSG conference in 2019 that the label *constraint-based* may not be the best for the theories that are usually referred to with it. Changing the term in this work would require changing the title of the book. The label *model theoretic* may be more appropriate, but those doing implementational work in HPSG and LFG without considering models may find that term inappropriate. I hence decided to stick to the established term.

I followed the advice by Lisbeth Augustinus and added a preface to Part II of the book that gives the reader some orientation as to what to expect.

<sup>11</sup>Note added on 2021-11-05: The editors of the HPSG handbook (Müller, Abeillé, Borsley & Koenig 2021) decided to put arg-st under cat (Abeillé & Borsley 2021: 19) because of the analysis of complex predicates in French. On French complex predicates see Godard & Samvelian (2021: 426–427).


I thank Mikhail Knyazev for pointing out to me that the treatment of V-to-I-to-C movement in the German literature differs from the lowering that is assumed for English and that some further references are needed in the chapter on Government & Binding.

Working on the Chinese translation of this book, Wang Lulu pointed out some typos and a wrong example sentence in Chinese. Thanks for these comments!

I thank Bob Borsley, Gisbert Fanselow, Hubert Haider and Pavel Logacev for discussion, Ina Baier for pointing out a mistake in a CG proof, and Jonas Benn for pointing out some typos. Thanks to Tabea Reiner for a comment on gradedness. Thanks also to Antonio Machicao y Priemer for yet another set of comments on the second edition and to Elizabeth Pankratz for proofreading parts of what I changed.

Berlin, 15th August 2019 Stefan Müller

# **Foreword of the fourth edition**

I fixed several typos, added and updated URLs and DOIs in the book and in the list of references. I added a footnote to Chapter 3 concerning the assignment of semantic roles across phrase boundaries (footnote 21 on p. 111). I thank Andreas Pankau for discussion on this point.

I added a paragraph discussing John Torr's implementational work (pages 177–180). I thank Shalom Lappin and Richard Sproat for discussion of implementation issues.

A small paragraph for further reading was added to Chapter 21 on phrasal vs. lexical analyses.

Language Science Press will publish a handbook on Head-Driven Phrase Structure Grammar hopefully later this year (Müller, Abeillé, Borsley & Koenig 2021). It contains several chapters comparing other syntactic theories to HPSG. I added the respective references to the further readings sections of the chapters for Lexical Functional Grammar, Categorial Grammar, Construction Grammar, and Minimalism.

This edition is the first edition that uses precompiled trees. Setting this up was not straightforward. I am really grateful to Sašo Živanović for helping me and adapting the forest package so that everything runs smoothly and efficiently. This saves me a lot of time and reduces the energy consumption of my computer dramatically.

Berlin, 2nd September 2020 Stefan Müller

# **Foreword of the fifth edition**

I want to thank Philip Kime for help with biber, the tool that Language Science Press is using for creating lists of references and for manipulating bibliography databases. The bibliography was updated and manually checked, since this was done for the HPSG handbook (Müller, Abeillé, Borsley & Koenig 2021), whose list of references overlaps with the publications cited here. Papers now have DOIs wherever possible.

Ladis Duffet pointed out a mistake in Section 1.7.4, which probably confused many who tried to make sense of this section in earlier editions. I fixed a mistake at the beginning of Section 8.5.1: it now reads "backward application" instead of "forward application". I fixed the Case Principle in the chapter on HPSG: the first two clauses did not mention that they only apply to verbal heads. As pointed out to me by an anonymous reader, the type of the AVM in (11) should have been *woman* rather than *female person*. I also changed the values of father and mother into *man* and *woman*. The top-most type in Figure 6.1 has to be *electric device* rather than *electrical appliance*, since this is the name used in the text. I fixed some brackets in the Categorial Grammar derivation in Figure 8.9; there were just too many brackets to keep track of everything… Thanks to Matthew Korte and Pascal Hohmann for spotting this (independently)! Léonie Cujé found superfluous brackets in Figure 8.5. They were removed. Thanks! Figure 9.10 on page 296 contained some strange brackets, which I have removed now.

I also want to thank an anonymous reader for sending patches to the LaTeX files correcting some typos and wrong or missing words in glosses.

Since the last two reviews of the book complained about the classification and new developments sections referring to material not introduced yet, I decided to make the structure of the book more explicit by repeating the introductory remark from page ix at the beginning of all the advanced sections. I still think that this is the correct structure for the book: to introduce a certain framework and then evaluate it. The only way to fairly evaluate a theory is to compare it to other theories. This cannot be done without knowledge of the theories to be compared. So readers interested in such comparisons should read the introductory parts of the chapters and then come back to the evaluation part and the parts discussing further developments. Culicover (2021) remarked that it is unclear how the book is supposed to be used for teaching. The book is already used at many, many universities worldwide, but those who want to know how I use it may check out my slides, which are available both as PDF and source code on GitHub: https://github.com/stefan11/grammatical-theory-slides. During Corona times, I also put recordings of my lessons online: https://www.youtube.com/watch?v=_W6nVRnC0NA&list=PLXwGGsuPxWRotmEg5LStGTxZWEkqKXmrh&index=1.

Berlin, 23rd November 2022 Stefan Müller

# **Part I**

# **Background and specific theories**

# **1 Introduction and basic terms**

The aim of this chapter is to explain why we actually study syntax (Section 1.1) and why it is important to formalize our findings (Section 1.2). Some basic terminology will be introduced in Sections 1.3–1.8: Section 1.3 deals with criteria for dividing up utterances into smaller units. Section 1.4 shows how words can be grouped into classes; that is, I will introduce criteria for assigning words to categories such as verb or adjective. Section 1.5 introduces the notion of heads, Section 1.6 explains the distinction between arguments and adjuncts, Section 1.7 defines grammatical functions, and Section 1.8 introduces the notion of topological fields, which can be used to characterize certain areas of the clause in languages such as German.

Unfortunately, linguistics is a scientific field with a considerable amount of terminological chaos. This is partly due to the fact that terminology originally defined for certain languages (e.g., Latin, English) was later simply adopted for the description of other languages as well. However, this is not always appropriate since languages differ from one another considerably and are constantly changing. Due to the problems caused by this, the terminology started to be used differently or new terms were invented. When new terms are introduced in this book, I will always mention related terminology or differing uses of each term so that readers can relate this to other literature.

# **1.1 Why do syntax?**

Every linguistic expression we utter has a meaning. We are therefore dealing with what has been referred to as form-meaning pairs (de Saussure 1916). A word such as *tree* in its specific orthographical form or in its corresponding phonetic form is assigned the meaning *tree*′ . Larger linguistic units can be built up out of smaller ones: words can be joined together to form phrases and these in turn can form sentences.

The question which now arises is the following: do we need a formal system which can assign a structure to these sentences? Would it not be sufficient to formulate a pairing of form and meaning for complete sentences just as we did for the word *tree* above?

That would, in principle, be possible if a language were just a finite list of word sequences. If we were to assume that there is a maximum length for sentences and a maximum length for words and thus that there can only be a finite number of words, then the number of possible sentences would indeed be finite. However, even if we were to restrict the possible length of a sentence, the number of possible sentences would still be enormous. The question we would then really need to answer is: what is the maximum length of a sentence? For instance, it is possible to extend all the sentences in (1):

(1) b. [A sentence is a sentence] is a sentence.
    c. that Max thinks that Julius knows that Otto claims that Karl suspects that Richard confirms that Friederike is laughing

In (1b), something is being said about the group of words *a sentence is a sentence*, namely that it is a sentence. One can, of course, claim the same for the whole sentence in (1b) and extend the sentence once again with *is a sentence*. The sentence in (1c) has been formed by combining *that Friederike is laughing* with *that*, *Richard* and *confirms*. The result of this combination is a new sentence *that Richard confirms that Friederike is laughing*. In the same way, this has then been extended with *that*, *Karl* and *suspects*. Thus, one obtains a very complex sentence which embeds a less complex sentence. This partial sentence in turn contains a further partial sentence and so on. (1c) is similar to those sets of Russian nesting dolls, also called *matryoshka*: each doll contains a smaller doll which can be painted differently from the one that contains it. In just the same way, the sentence in (1c) contains parts which are similar to it but which are shorter and involve different nouns and verbs. This can be made clearer by using brackets in the following way:

(2) that Max thinks [that Julius knows [that Otto claims [that Karl suspects [that Richard confirms [that Friederike is laughing]]]]]

We can build incredibly long and complex sentences in the ways that were demonstrated in (1).<sup>1</sup>

It would be arbitrary to establish some cut-off point up to which such combinations can be considered to belong to our language (Harris 1957: 208; Chomsky 1957: 23). It is also implausible to claim that such complex sentences are stored in our brains as single complex units. While evidence from psycholinguistic experiments shows that highly frequent or idiomatic combinations are stored as complex units, this could not be the case for sentences such as those in (1). Furthermore, we are capable of producing utterances that we have never heard before and which have also never been uttered or written down previously. Therefore, these utterances must have some kind of structure; there must be patterns which occur again and again. As humans, we are able to build such complex structures out of simpler ones and, vice versa, to break down complex utterances into their component parts. Evidence for humans' ability to make use of rules for combining words into larger units has now also been provided by research in neuroscience (Pulvermüller 2010: 170).

<sup>1</sup> It is sometimes claimed that we are capable of constructing infinitely long sentences (Nowak, Komarova & Niyogi 2001: 117; Kim & Sells 2008: 3; Dan Everett in O'Neill & Wood (2012) at 25:19; Chesi 2015: 67; Lin 2017: 5; Martorell 2018: 2; Wikipedia entry of Biolinguistics/Minimalism, 2019-10-17) or that Chomsky made such claims (Leiss 2003: 341). This is, however, not correct since every sentence has to come to an end at some point. Even in the theory of formal languages developed in the Chomskyan tradition, there are no infinitely long sentences. This is especially clear in Minimalist theories (Chapter 4) since there are only binary combinations. When combining two objects (words or groups of words) of finite length, one gets a new object of finite length. There is no way to get infinitely long sentences. Rather the claim is that certain formal grammars can describe a set containing infinitely many finite sentences (Chomsky 1957: 13). See also Pullum & Scholz (2010) and Section 13.1.8 on the issue of recursion in grammar and for claims about the infinite nature of language.
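To make the footnote's point concrete (a finite set of recursive rules licenses unboundedly many finite sentences), here is a minimal sketch in Python. The code and its vocabulary are my own illustration of the embedding pattern in (2), not part of the book's formal apparatus:

```python
# A toy recursive rewrite rule: S -> 'that Friederike is laughing'
#                                  | 'that Max thinks' + S
# Every generated sentence is finite, but there is no longest one:
# for every depth d there is a grammatical sentence of depth d + 1.

def embed(depth: int) -> str:
    if depth == 0:
        return "that Friederike is laughing"
    return f"that Max thinks [{embed(depth - 1)}]"

for d in range(3):
    print(embed(d))
# that Friederike is laughing
# that Max thinks [that Friederike is laughing]
# that Max thinks [that Max thinks [that Friederike is laughing]]
```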


It becomes particularly evident that we combine linguistic material in a rule-governed way when these rules are violated. Children acquire linguistic rules by generalizing from the input available to them. In doing so, they produce some utterances which they could not have ever heard previously:

(3) Ich festhalte die. (Friederike, 2;6)
    I part.hold them
    Intended: 'I hold them tight.'

Friederike, who was learning German, was at the stage of acquiring the rule for the position of the finite verb (namely, second position). What she did here, however, was to place the whole verb, including a separable particle *fest* 'tight', in the second position although the particle should be realized at the end of the clause (*Ich halte die fest.*).

If we do not wish to assume that language is merely a list of pairings of form and meaning, then there must be some process whereby the meaning of complex utterances can be obtained from the meanings of the smaller components of those utterances. Syntax reveals something about the way in which the words involved can be combined, something about the structure of an utterance. For instance, knowledge about subject-verb agreement helps with the interpretation of the following sentences in German:

(4) a. Die Frau schläft.
       the woman sleep.3sg
       'The woman sleeps.'
    b. Die Mädchen schlafen.
       the girls sleep.3pl
       'The girls sleep.'
    c. Die Frau kennt die Mädchen.
       the woman know.3sg the girls
       'The woman knows the girls.'
    d. Die Frau kennen die Mädchen.
       the woman know.3pl the girls
       'The girls know the woman.'

The sentences in (4a,b) show that a singular or a plural subject requires a verb with the corresponding inflection. In (4a,b), the verb only requires one argument so the function of *die Frau* 'the woman' and *die Mädchen* 'the girls' is clear. In (4c,d) the verb requires two arguments and *die Frau* 'the woman' and *die Mädchen* 'the girls' could appear in either argument position in German. The sentences could mean that the woman knows somebody or that somebody knows the woman. However, due to the inflection on the verb and knowledge of the syntactic rules of German, the hearer knows that there is only one available reading for (4c) and (4d), respectively.

It is the role of syntax to discover, describe and explain such rules, patterns and structures.


# **1.2 Why do it formally?**

The two following quotations give a motivation for the necessity of describing language formally:

Precisely constructed models for linguistic structure can play an important role, both negative and positive, in the process of discovery itself. By pushing a precise but inadequate formulation to an unacceptable conclusion, we can often expose the exact source of this inadequacy and, consequently, gain a deeper understanding of the linguistic data. More positively, a formalized theory may automatically provide solutions for many problems other than those for which it was explicitly designed. Obscure and intuition-bound notions can neither lead to absurd conclusions nor provide new and correct ones, and hence they fail to be useful in two important respects. I think that some of those linguists who have questioned the value of precise and technical development of linguistic theory have failed to recognize the productive potential in the method of rigorously stating a proposed theory and applying it strictly to linguistic material with no attempt to avoid unacceptable conclusions by ad hoc adjustments or loose formulation. (Chomsky 1957: 5)

As is frequently pointed out but cannot be overemphasized, an important goal of formalization in linguistics is to enable subsequent researchers to see the defects of an analysis as clearly as its merits; only then can progress be made efficiently. (Dowty 1979: 322)

If we formalize linguistic descriptions, it is easier to recognize what exactly a particular analysis means. We can establish what predictions it makes and we can rule out alternative analyses. A further advantage of precisely formulated theories is that they can be written down in such a way that computer programs can process them. When a theoretical analysis is implemented as a computationally processable grammar fragment, any inconsistency will become immediately evident. Such implemented grammars can then be used to process large collections of text, so-called corpora, and they can thus establish which sentences a particular grammar cannot yet analyze or which sentences are assigned the wrong structure. For more on using computer implementation in linguistics see Bierwisch (1963: 163), Müller (1999b: Chapter 22) and Bender (2008b) as well as Section 3.6.2.
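To give a flavor of what such an implemented grammar fragment looks like, here is a minimal sketch using the freely available Python library NLTK. The rules and the vocabulary are invented for this illustration and are far too crude for a real fragment; in particular, the grammar has no subject-verb agreement and therefore wrongly accepts *die Mädchen schläft*. It is exactly this kind of overgeneration that becomes evident when an implemented grammar is tested against data:

```python
import nltk

# A deliberately tiny context-free grammar fragment for a few German
# sentences. Real grammar fragments contain far more rules and constraints.
grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> Det N
    VP  -> V | V NP
    Det -> 'die'
    N   -> 'Frau' | 'Mädchen'
    V   -> 'schläft' | 'kennt'
""")

parser = nltk.ChartParser(grammar)

for sentence in ["die Frau schläft", "die Frau kennt die Mädchen"]:
    for tree in parser.parse(sentence.split()):
        tree.pretty_print()  # print the constituent structure as text
```

Running such a parser over a whole corpus immediately shows which sentences receive no analysis or a wrong one, which is the point made above.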

# **1.3 Constituents**

If we consider the sentence in (5), we have the intuition that certain words form a unit.

(5) Alle Studenten lesen während dieser Zeit Bücher.
    all students read during this time books
    'All the students are reading books at this time.'

For example, the words *alle* 'all' and *Studenten* 'students' form a unit which says something about who is reading. *während* 'during', *dieser* 'this' and *Zeit* 'time' also form a

unit which refers to a period of time during which the reading takes place, and *Bücher* 'books' says something about what is being read. The first unit is itself made up of two parts, namely *alle* 'all' and *Studenten* 'students'. The unit *während dieser Zeit* 'during this time' can also be divided into two subcomponents: *während* 'during' and *dieser Zeit* 'this time'. *dieser Zeit* 'this time' is also composed of two parts, just like *alle Studenten* 'all students' is.

Recall that in connection with (1c) above we talked about the sets of Russian nesting dolls (*matryoshkas*). Here, too, when we break down (5) we have smaller units which are components of bigger units. However, in contrast to the Russian dolls, we do not just have one smaller unit contained in a bigger one but rather, we can have several units which are grouped together in a bigger one. The best way to envisage this is to imagine a system of boxes: one big box contains the whole sentence. Inside this box, there are four other boxes, which each contain *alle Studenten* 'all students', *lesen* 'reads', *während dieser Zeit* 'during this time' and *Bücher* 'books', respectively. Figure 1.1 illustrates this.

Figure 1.1: Words and phrases in boxes (the figure shows (5) as nested boxes: one box for the whole sentence, containing boxes for [alle Studenten], [lesen], [während [dieser Zeit]] and [Bücher])

In the following section, I will introduce various tests which can be used to show how certain words seem to "belong together" more than others. When I speak of a *word sequence*, I generally mean an arbitrary linear sequence of words which do not necessarily need to have any syntactic or semantic relationship, e.g., *Studenten lesen während* 'students read during' in (5). A sequence of words which form a structural entity, on the other hand, is referred to as a *phrase*. Phrases can consist of words as in *this time* or of combinations of words with other phrases as in *during this time*. The parts of a phrase and the phrase itself are called *constituents*. So all elements that are in a box in Figure 1.1 are constituents of the sentence.
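The box metaphor corresponds directly to a nested data structure. As a small sketch (my own illustration, not part of the book), the boxes of Figure 1.1 can be written as nested lists, and the constituents are exactly the boxes one encounters when walking through the nesting:

```python
# Nested lists mirroring the boxes of Figure 1.1 (illustration only).
sentence = [
    ["alle", "Studenten"],            # 'all students'
    "lesen",                          # 'read'
    ["während", ["dieser", "Zeit"]],  # 'during this time'
    "Bücher",                         # 'books'
]

def constituents(node):
    """Yield every box: the node itself and, recursively, its parts."""
    yield node
    if isinstance(node, list):
        for part in node:
            yield from constituents(part)

for c in constituents(sentence):
    print(c)
```

Note that this enumeration yields the phrases and the individual words, but not arbitrary word sequences such as *Studenten lesen während*, which cut across box boundaries.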

Following these preliminary remarks, I will now introduce some tests which will help us to identify whether a particular string of words is a constituent or not.

### **1.3.1 Constituency tests**

There are a number of ways to test the constituent status of a sequence of words. In the following subsections, I will present some of these. In Section 1.3.2, we will see that there are cases when simply applying a test "blindly" leads to unwanted results.

#### **1.3.1.1 Substitution**

If it is possible to replace a sequence of words in a sentence with a different sequence of words and the acceptability of the sentence remains unaffected, then this constitutes evidence for the fact that each sequence of words forms a constituent.


In (6), *den Mann* 'the man' can be replaced by the string *eine Frau* 'a woman'. This is an indication that both of these word sequences are constituents.

(6) a. Er kennt [den Mann].
       he knows the man
       'He knows the man.'
    b. Er kennt [eine Frau].
       he knows a woman
       'He knows a woman.'

Similarly, in (7a), the string *das Buch zu lesen* 'the book to read' can be replaced by *dem Kind das Buch zu geben* 'the child the book to give'.

(7) a. Er versucht, [das Buch zu lesen].
       he tries the book to read
       'He is trying to read the book.'
    b. Er versucht, [dem Kind das Buch zu geben].
       he tries the child the book to give
       'He is trying to give the child the book.'

This test is referred to as the *substitution test*.

### **1.3.1.2 Pronominalization**

Everything that can be replaced by a pronoun forms a constituent. In (8), one can for example refer to *der Mann* 'the man' with the pronoun *er* 'he':

(8) a. [Der Mann] schläft.
       the man sleeps
       'The man is sleeping.'
    b. Er schläft.
       he sleeps
       'He is sleeping.'

It is also possible to use a pronoun to refer to constituents such as *das Buch zu lesen* 'the book to read' in (7a), as is shown in (9):

(9) b. Klaus versucht das auch.
       Klaus tries that also
       'Klaus is trying to do that as well.'

The pronominalization test is another form of the substitution test.

### **1.3.1.3 Question formation**

A sequence of words that can be elicited by a question forms a constituent:

(10) b. Wer arbeitet?
        who works
        'Who is working?'

Question formation is a specific case of pronominalization. One uses a particular type of pronoun (an interrogative pronoun) to refer to the word sequence.

Constituents such as *das Buch zu lesen* in (7a) can also be elicited by questions, as (11) shows:

(11) Was versucht er?
     what tries he
     'What does he try?'

### **1.3.1.4 Permutation test**

If a sequence of words can be moved without adversely affecting the acceptability of the sentence in which it occurs, then this is an indication that this word sequence forms a constituent.

In (12), *keiner* 'nobody' and *dieses Kind* 'this child' exhibit different orderings, which suggests that *dieses* 'this' and *Kind* 'child' belong together.

(12) a. dass keiner [dieses Kind] kennt
        that nobody this child knows
        'that nobody knows this child'
     b. dass [dieses Kind] keiner kennt
        that this child nobody knows
        'that nobody knows this child'

On the other hand, it is not plausible to assume that *keiner dieses* 'nobody this' forms a constituent in (12a). If we try to form other possible orderings by moving *keiner dieses* 'nobody this' as a whole, we see that this leads to unacceptable results:<sup>2</sup>

(13) a. \* dass Kind keiner dieses kennt
          that child nobody this knows

<sup>2</sup> I use the following notational conventions for all examples: '\*' indicates that a sentence is ungrammatical, '#' denotes that the sentence has a reading which differs from the intended one and finally '§' should be understood as a sentence which is deviant for semantic or information-structural reasons, for example, because the subject must be animate, but is in fact inanimate in the example in question, or because there is a conflict between constituent order and the marking of given information through the use of pronouns.


b. \* dass Kind kennt keiner dieses
      that child knows nobody this

Furthermore, constituents such as *das Buch zu lesen* 'to read the book' in (7a) can be moved:

(14) b. Er hat [das Buch zu lesen] noch nicht versucht.
        he has the book to read part not tried
        'He has not yet tried to read the book.'
     c. Er hat noch nicht versucht, [das Buch zu lesen].
        he has part not tried the book to read
        'He has not yet tried to read the book.'

### **1.3.1.5 Fronting**

Fronting is a further variant of the movement test. In German declarative sentences, only a single constituent may normally precede the finite verb:

(15) a. [Alle Studenten] lesen während der vorlesungsfreien Zeit Bücher.
        all students read during the lecture.free time books
        'All the students read books during the time without lectures.'
     b. [Bücher] lesen alle Studenten während der vorlesungsfreien Zeit.
        books read all students during the lecture.free time
     c. \* [Alle Studenten] [Bücher] lesen während der vorlesungsfreien Zeit.
          all students books read during the lecture.free time
     d. \* [Bücher] [alle Studenten] lesen während der vorlesungsfreien Zeit.
          books all students read during the lecture.free time

The possibility for a sequence of words to be fronted (that is, to occur in front of the finite verb) is a strong indicator of constituent status.

### **1.3.1.6 Coordination**

If two sequences of words can be conjoined then this suggests that each sequence forms a constituent.

In (16), *der Mann* 'the man' and *die Frau* 'the woman' are conjoined and the entire coordination is the subject of the verb *arbeiten* 'to work'. This is a good indication of the fact that *der Mann* and *die Frau* each form a constituent.

(16) [Der Mann] und [die Frau] arbeiten.
     the man and the woman work.3pl
     'The man and the woman work.'

The example in (17) shows that phrases with *to*-infinitives can be conjoined:

(17) Er hat versucht, [das Buch zu lesen] und [es dann unauffällig verschwinden zu lassen].
     he has tried the book to read and it then secretly disappear to let
     'He tried to read the book and then make it quietly disappear.'

## **1.3.2 Some comments on the status of constituent tests**

It would be ideal if the tests presented here delivered clear-cut results in every case, as the empirical basis on which syntactic theories are built would thereby become much clearer. Unfortunately, this is not the case. There are in fact a number of problems with constituent tests, which I will discuss in what follows.

### **1.3.2.1 Expletives**

There is a particular class of pronouns – so-called *expletives* – which do not denote people, things, or events and are therefore non-referential. An example of this is *es* 'it' in (18).

	- b. Regnet rains es? it 'Is it raining?'
	- c. dass that es it jetzt now regnet rains 'that it is raining now'

As the examples in (18) show, *es* can either precede the verb, or follow it. It can also be separated from the verb by an adverb, which suggests that *es* should be viewed as an independent unit.

Nevertheless, we observe certain problems with the aforementioned tests. Firstly, *es* 'it' is restricted with regard to its movement possibilities, as (19a) and (20b) show.

	- b. dass that jetzt now keiner nobody klatscht claps 'that nobody is clapping now'

Unlike the accusative object *einen Mann* 'a man' in (20c,d), the expletive in (20b) cannot be fronted.

Secondly, substitution and question tests also fail:

	- b. \* Wer who / was what regnet? rains

Similarly, the coordination test cannot be applied either:

(22) \* Es it und and der the Mann man regnet rains / regnen. rain

The failure of these tests can be easily explained: weakly stressed pronouns such as *es* are preferably placed before other arguments, directly after the conjunction (*dass* in (18c)) and directly after the finite verb in (20a) (see Abraham 1995: 570). If an element is placed in front of the expletive, as in (19a), then the sentence is rendered ungrammatical. The reason for the ungrammaticality of (20b) is the general ban on accusative *es* appearing in clause-initial position. Although such cases exist, they are only possible if *es* 'it' is referential (Lenerz 1994: 162; Gärtner & Steinbach 1997: 4).

The fact that we could not apply the substitution and question tests is also no longer mysterious as *es* is not referential in these cases. We can only replace *es* 'it' with another expletive such as *das* 'that'. If we replace the expletive with a referential expression, we derive a different semantic interpretation. It does not make sense to ask about something semantically empty or to refer to it with a pronoun.

It follows from this that not all of the tests have to deliver a positive result for a sequence of words to count as a constituent; the tests are therefore not a necessary condition for constituent status.

### **1.3.2.2 Movement**

The movement test is problematic for languages with relatively free constituent order, since it is not always possible to tell what exactly has been moved. For example, the string *gestern dem Mann* 'yesterday the man' occupies different positions in the following examples:

	- b. weil because gestern yesterday dem the Mann man keiner nobody geholfen helped hat has 'because nobody helped the man yesterday'

One could therefore assume that *gestern* 'yesterday' and *dem Mann* 'the man', which of course do not form a constituent, have been moved together. An alternative explanation for the ordering variants in (23) is that adverbs can occur in various positions in the clause and that only *dem Mann* 'the man' has been moved in front of *keiner* 'nobody' in (23b). In any case, it is clear that *gestern* and *dem Mann* have no semantic relation and that it is impossible to refer to both of them with a pronoun. Although it may seem at first glance as if this material had been moved as a unit, we have seen that it is in fact not tenable to assume that *gestern dem Mann* 'yesterday the man' forms a constituent.

### **1.3.2.3 Fronting**

As mentioned in the discussion of (15), the position in front of the finite verb is normally occupied by a single constituent. The possibility for a given word sequence to be placed in front of the finite verb is sometimes even used as a clear indicator of constituent status, and even in the definition of *Satzglied*.<sup>3</sup> One such definition is taken from Bußmann (1983), but is no longer present in Bußmann (1990):<sup>4</sup>

**Satzglied test** A procedure based on → topicalization used to analyze complex constituents. Since topicalization only allows a single constituent to be moved to the beginning of the sentence, complex sequences of constituents, for example adverb phrases, can be shown to actually consist of one or more constituents. In the example *Ein Taxi quält sich im Schrittempo durch den Verkehr* 'A taxi was struggling at walking speed through the traffic', *im Schrittempo* 'at walking speed' and *durch den Verkehr* 'through the traffic' are each constituents as both can be fronted independently of each other. (Bußmann 1983: 446)

The preceding quote has the following implications:

• Some part of a piece of linguistic material can be fronted independently → This material does not form a constituent.

<sup>3</sup> *Satzglied* is a special term used in grammars of German, referring to a constituent on the clause level (Eisenberg et al. 2005: 783).

<sup>4</sup> The original formulation is: **Satzgliedtest** [Auch: Konstituententest]. Auf der → Topikalisierung beruhendes Verfahren zur Analyse komplexer Konstituenten. Da bei Topikalisierung jeweils nur eine Konstituente bzw. ein → Satzglied an den Anfang gerückt werden kann, lassen sich komplexe Abfolgen von Konstituenten (z. B. Adverbialphrasen) als ein oder mehrere Satzglieder ausweisen; in *Ein Taxi quält sich im Schrittempo durch den Verkehr* sind *im Schrittempo* und *durch den Verkehr* zwei Satzglieder, da sie beide unabhängig voneinander in Anfangsposition gerückt werden können.


• Linguistic material can be fronted together → This material forms a constituent.

It will be shown that both of these prove to be problematic. The first implication is cast into doubt by the data in (24):

	- b. [Über about den the Abbau reduction der of.the Agrarsubventionen] agricultural.subsidies erreichten reached Schröder Schröder und and Chirac Chirac keine no Einigung. agreement

Although parts of the noun phrase *keine Einigung über den Abbau der Agrarsubventionen* 'no agreement on the reduction of agricultural subsidies' can be fronted individually, we still want to analyze the entire string as a noun phrase when it is not fronted as in (25):

(25) Schröder Schröder und and Chirac Chirac erreichten reached [keine no Einigung agreement über about den the Abbau reduction der of.the Agrarsubventionen]. agricultural.subsidies

The prepositional phrase *über den Abbau der Agrarsubventionen* 'on the reduction of agricultural subsidies' is semantically dependent on *Einigung* 'agreement', cf. (26):

(26) Sie they einigen agree sich refl über about die the Agrarsubventionen. agricultural.subsidies 'They agree on the agricultural subsidies.'

This word sequence can also be fronted together:

(27) [Keine no Einigung agreement über about den the Abbau reduction der of.the Agrarsubventionen] agricultural.subsidies erreichten reached Schröder Schröder und and Chirac. Chirac

In the theoretical literature, it is assumed that *keine Einigung über den Abbau der Agrarsubventionen* forms a constituent which can be "split up" under certain circumstances. In such cases, the individual subconstituents can be moved independently of each other (De Kuthy 2002), as we have seen in (24).

<sup>5</sup> tagesschau, 15.10.2002, 20:00.

The second implication is problematic because of examples such as (28):

(28) a. [Trocken] dry [durch through die the Stadt] city kommt comes man one am at.the Wochenende weekend auch also mit with der the BVG.<sup>6</sup> BVG

'With the BVG, you can be sure to get around town dry at the weekend.'

b. [Wenig] little [mit with Sprachgeschichte] language.history hat has der the dritte third Beitrag contribution in in dieser this Rubrik section zu to tun, do […]<sup>7</sup> 'The third contribution in this section has little to do with language history.'

In (28), there are multiple constituents preceding the finite verb, which bear no obvious syntactic or semantic relation to each other. Exactly what is meant by a "syntactic or semantic relation" will be fully explained in the following chapters. At this point, I will just point out that in (28a) the adjective *trocken* 'dry' has *man* 'one' as its subject and furthermore says something about the action of 'travelling through the city'. That is, it refers to the action denoted by the verb. As (29b) shows, *durch die Stadt* 'through the city' cannot be combined with the adjective *trocken* 'dry'.

	- b. \* Man one ist is / bleibt stays trocken dry durch through die the Stadt. city

Therefore, the adjective *trocken* 'dry' does not have a syntactic or semantic relationship with the prepositional phrase *durch die Stadt* 'through the city'. Both phrases have in common that they refer to the verb and are dependent on it.

One may simply wish to treat the examples in (28) as exceptions. This approach would, however, not be justified, as I have shown in an extensive empirical study (Müller 2003a).

If one were to classify *trocken durch die Stadt* as a constituent due to it passing the fronting test, then one would have to assume that *trocken durch die Stadt* in (30) is also a constituent. In doing so, we would devalue the term *constituent* as the whole point of constituent tests is to find out which word strings have some semantic or syntactic relationship.<sup>8</sup>

<sup>6</sup> taz berlin, 10.07.1998, p. 22.

<sup>7</sup> Zeitschrift für Dialektologie und Linguistik, LXIX, 3/2002, p. 339.

<sup>8</sup> These data can be explained by assuming a silent verbal head preceding the finite verb, thereby ensuring that there is in fact just one constituent in initial position in front of the finite verb (Müller 2005c, 2023a). Nevertheless, such data are problematic for constituent tests, since these tests were specifically designed to tease apart whether strings such as *trocken* and *durch die Stadt* or *wenig* and *mit Sprachgeschichte* in (30) form a constituent.

(30) a. Man one kommt comes am at.the Wochenende weekend auch also mit with der the BVG BVG trocken dry durch through die the Stadt. city

'With the BVG, you can be sure to get around town dry at the weekend.'

b. Der the dritte third Beitrag contribution in in dieser this Rubrik section hat has wenig little mit with Sprachgeschichte language.history zu to tun. do

'The third contribution in this section has little to do with language history.'

The possibility for a given sequence of words to be fronted is therefore not a sufficient diagnostic for constituent status.

We have also seen that it makes sense to treat expletives as constituents despite the fact that the accusative expletive cannot be fronted (cf. (20b)):

	- b. # Es it bringt brings er he bis until zum to.the Professor. professor

There are other elements that can also not be fronted. Inherent reflexives are a good example of this:

	- b. \* Sich refl hat has Karl Karl nicht not erholt. recovered

It follows from this that fronting is not a necessary criterion for constituent status either. Taken together with the previous discussion, the possibility for a given word string to be fronted is thus neither a necessary nor a sufficient condition for constituent status.

### **1.3.2.4 Coordination**

Coordinated structures such as those in (33) also prove to be problematic:

(33) Deshalb therefore kaufte bought der the Mann man einen a Esel donkey und and die the Frau woman ein a Pferd. horse 'Therefore, the man bought a donkey and the woman a horse.'

At first glance, *der Mann einen Esel* 'the man a donkey' and *die Frau ein Pferd* 'the woman a horse' in (33) seem to be coordinated. Does this mean that *der Mann einen Esel* and *die Frau ein Pferd* each form a constituent?

As other constituent tests show, this assumption is not plausible. This sequence of words cannot be moved together as a unit:<sup>9</sup>

(34) \* Der the Mann man einen a Esel donkey kaufte bought deshalb. therefore

Replacing the supposed constituent is also not possible without ellipsis:

(35) a. # Deshalb therefore kaufte bought er. he

	- b. \* Deshalb therefore kaufte bought ihn. him

The pronouns do not stand in for the two logical arguments of *kaufen* 'to buy', which are realized by *der Mann* 'the man' and *einen Esel* 'a donkey' in (33), but rather for one argument each. However, there are analyses of examples such as (33) that assume two instances of the verb *kaufte* 'bought', of which only one is overt (Crysmann 2008). The example in (33) would therefore correspond to:

(36) Deshalb therefore kaufte bought der the Mann man einen a Esel donkey und and kaufte bought die the Frau woman ein a Pferd. horse

This means that although it seems as though *der Mann einen Esel* 'the man a donkey' and *die Frau ein Pferd* 'the woman a horse' are coordinated, it is actually *kaufte der Mann einen Esel* 'bought the man a donkey' and *(kaufte) die Frau ein Pferd* 'bought the woman a horse' which are conjoined.

We should take the following from the previous discussion: even when a given word sequence passes certain constituent tests, this does not mean that one can automatically infer from this that we are dealing with a constituent. That is, the tests we have seen are not sufficient conditions for constituent status.

Summing up, it has been shown that these tests are neither sufficient nor necessary for attributing constituent status to a given sequence of words. However, as long as one keeps the problematic cases in mind, the previous discussion should be enough to get an initial idea about what should be treated as a constituent.

# **1.4 Parts of speech**

The words in (37) differ not only in their meaning but also in other respects.

(37) Der the große big Biber beaver schwimmt swims jetzt. now 'The big beaver swims now.'

<sup>9</sup> The area in front of the finite verb is also referred to as the *Vorfeld* 'prefield' (see Section 1.8). Apparent multiple fronting is possible under certain circumstances in German. See the previous section, especially the discussion of the examples in (28) on page 15. The example in (34) is created in such a way that the subject is present in the prefield, which is not normally possible with verbs such as *kaufen* 'to buy' for reasons which have to do with the information-structural properties of these kinds of fronting constructions. Compare also De Kuthy & Meurers 2003b on subjects in fronted verb phrases and Bildhauer & Cook 2010: 72 on frontings of subjects in apparent multiple frontings.

Each of the words is subject to certain restrictions when forming sentences. It is common practice to group words into classes with other words which share certain salient properties. For example, *der* 'the' is an article, *Biber* 'beaver' is a noun, *schwimmt* 'swims' is a verb and *jetzt* 'now' is an adverb. As can be seen in (38), it is possible to replace all the words in (37) with words from the same word class.

(38) Die the kleine small Raupe caterpillar frisst eats immer. always 'The small caterpillar is always eating.'

This is not always the case, however. For example, it is not possible to use a verb such as *verschlingt* 'devours' or the second-person form *schwimmst* in (38). This means that the categorization of words into parts of speech is rather coarse and that we will have to say a lot more about the properties of a given word. In this section, I will discuss various word classes/parts of speech and in the following sections I will go into further detail about the various properties which characterize a given word class.

The most important parts of speech are *verbs*, *nouns*, *adjectives*, *prepositions* and *adverbs*. In earlier decades, it was common among researchers working on German (see also Section 11.6.1 on Tesnière's category system) to speak of *action words*, *describing words*, and *naming words*. These descriptions prove problematic, however, as illustrated by the following examples:

(39) a. die the *Idee* idea
	- b. die the *Stunde* hour
	- c. das the laute loud *Sprechen* speaking '(the act of) speaking loudly'
	- d. Die the *Erörterung* discussion der of.the Lage situation dauerte lasted mehrere several Stunden. hours 'The discussion of the situation lasted several hours.'

(39a) does not describe a concrete entity, (39b) describes a time interval and (39c) and (39d) describe actions. It is clear that *Idee* 'idea', *Stunde* 'hour', *Sprechen* 'speaking' and *Erörterung* 'discussion' differ greatly in terms of their meaning. Nevertheless, these words still behave like *Raupe* 'caterpillar' and *Biber* 'beaver' in many respects and are therefore classed as nouns.

The term *action word* is not used in scientific linguistic work as verbs do not always need to denote actions:

(40) a. Ihm him gefällt pleases das the Buch. book 'He likes the book.'


One would also have to class the noun *Erörterung* 'discussion' as an action word.

Adjectives do not always describe properties of objects. In the following examples, the opposite is in fact true: the property of being a murderer is presented as merely possible or probable, not as an actual property of the individuals the noun describes.

	- b. Soldaten soldiers sind are potenzielle potential Mörder. murderers

The adjectives themselves in (41) do not actually provide any information about the characteristics of the entities described. One may also wish to classify *lachende* 'laughing' in (42) as an adjective.

(42) der the lachende laughing Mann man

If, however, we are using properties and actions as our criteria for classification, *lachend* 'laughing' should technically be an action word.

Rather than semantic criteria, it is usually formal criteria which are used to determine word classes. The various forms a word can take are also taken into account. So *lacht* 'laughs', for example, has the forms given in (43).

(43) a. Ich I lache. laugh
	- b. Du you.sg lachst. laugh
	- c. Er he lacht. laughs
	- d. Wir we lachen. laugh
	- e. Ihr you.pl lacht. laugh
	- f. Sie they lachen. laugh

In German, there are also forms for the preterite, imperative, present subjunctive, past subjunctive and non-finite forms (participles and infinitives with or without *zu* 'to'). All of these forms constitute the inflectional paradigm of a verb. Tense (present, preterite, future), mood (indicative, subjunctive, imperative), person (1st, 2nd, 3rd) and number (singular, plural) all play a role in the inflectional paradigm. Certain forms can coincide in a paradigm, as the pairs (43c)/(43e) and (43d)/(43f) show.
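Such a paradigm can be pictured as a table indexed by the inflectional features. The following minimal sketch (Python; the encoding and names are mine, purely for illustration, not part of the grammatical description itself) lists the present indicative forms in (43) and checks the two syncretisms just mentioned:

```python
# The present indicative paradigm of German lachen 'to laugh' as a
# mapping from (person, number) cells to word forms.
PRESENT_INDICATIVE_LACHEN = {
    ("1", "sg"): "lache",   # (43a)
    ("2", "sg"): "lachst",  # (43b)
    ("3", "sg"): "lacht",   # (43c)
    ("1", "pl"): "lachen",  # (43d)
    ("2", "pl"): "lacht",   # (43e)
    ("3", "pl"): "lachen",  # (43f)
}

# Distinct cells of a paradigm can share a form (syncretism):
assert PRESENT_INDICATIVE_LACHEN[("3", "sg")] == PRESENT_INDICATIVE_LACHEN[("2", "pl")]
assert PRESENT_INDICATIVE_LACHEN[("1", "pl")] == PRESENT_INDICATIVE_LACHEN[("3", "pl")]
```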

Parallel to verbs, nouns also have an inflectional paradigm:

(44) a. der the.nom Mann man
	- b. des the.gen Mannes man.gen
	- c. dem the.dat Mann man
	- d. den the.acc Mann man
	- e. die the.nom Männer men
	- f. der the.gen Männer men
	- g. den the.dat Männern men.dat
	- h. die the.acc Männer men

We can differentiate between nouns on the basis of gender (feminine, masculine, neuter). The choice of gender is often purely formal in nature and is only partially influenced by biological sex or the fact that we are describing a particular object:

	- b. der the.m Krampf cramp(M) 'cramp'
	- c. das the.n Kind child(N) 'the child'

As well as gender, case (nominative, genitive, dative, accusative) and number are also important for nominal paradigms.

Like nouns, adjectives inflect for gender, case and number. They differ from nouns, however, in that gender marking is variable. Adjectives can be used with all three genders:

	- b. ein a schöner beautiful.m Strauß bunch
	- c. ein a schönes beautiful.n Bouquet bouquet

In addition to gender, case and number, we can identify several inflectional classes. Traditionally, we distinguish between strong, mixed and weak inflection of adjectives. The inflectional class that we have to choose is dependent on the form or presence of the article:

(47) a. der the alte old Wein wine
	- b. ein a alter old Wein wine
	- c. alter old Wein wine

Furthermore, adjectives have comparative and superlative wordforms:

(48) a. klug clever
	- b. klüg-er clever-er
	- c. am at.the klüg-sten clever-est

This is not always the case. Especially for adjectives which make reference to some end point, a degree of comparison does not make sense. If a particular solution is optimal, for example, then no better one exists. Therefore, it does not make sense to speak of a "more optimal" solution. In a similar vein, it is not possible to be "deader" than dead.

There are some special cases, such as the color adjectives ending in -*a* in German, e.g., *lila* 'purple' and *rosa* 'pink'. These may remain uninflected (49a), but inflected forms are also possible (49b):

(49) a. eine a lila purple Blume flower
	- b. eine a lilane purple Blume flower

In both cases, *lila* is classed as an adjective. We can motivate this classification by appealing to the fact that both words occur in the same positions as other adjectives that clearly behave like adjectives with regard to inflection.

The parts of speech discussed thus far can all be differentiated in terms of their inflectional properties. For words which do not inflect, we have to use additional criteria. For example, we can classify words by the syntactic context in which they occur (as we did for the non-inflecting adjectives above). We can identify prepositions, adverbs, conjunctions, interjections and sometimes also particles. Prepositions are words which occur with a noun phrase whose case they determine:

	- b. in in diesem this.dat Raum room

*wegen* 'because of' is often classed as a preposition although it can also occur after the noun, in which case it would technically be a postposition:

(51) des the Geldes money.gen wegen because 'because of the money'

It is also possible to speak of *adpositions* if one wishes to remain neutral about the exact position of the word.

Unlike prepositions, adverbs do not require a noun phrase.

	- b. Er he schläft sleeps dort. there

Sometimes adverbs are simply treated as a special variant of prepositions (see page 94). The explanation for this is that a prepositional phrase such as *in diesem Raum* 'in this room' shows the same syntactic distribution as the corresponding adverbs. *in* differs from *dort* 'there' in that it needs an additional noun phrase. These differences are parallel to what we have seen with other parts of speech. For instance, the verb *schlafen* 'sleep' requires only a noun phrase, whereas *erkennen* 'recognize' requires two.

(53) a. Peter Peter schläft. sleeps
	- b. Peter Peter erkennt recognizes ihn. him

Conjunctions can be subdivided into subordinating and coordinating conjunctions. Coordinating conjunctions include *und* 'and' and *oder* 'or'. In coordinate structures, two units with the same syntactic properties are combined. They occur adjacent to one another. *dass* 'that' and *weil* 'because' are subordinating conjunctions because the clauses that they introduce can be part of a larger clause and depend on another element of this larger clause.

	- b. Klaus Klaus glaubt believes ihm him nicht, not weil because er he lügt. lies 'Klaus doesn't believe him because he is lying.'

Interjections are clause-like expressions such as *Ja!* 'Yes!', *Bitte!* 'Please!' *Hallo!* 'Hello!', *Hurra!* 'Hooray!', *Bravo!* 'Bravo!', *Pst!* 'Psst!', *Plumps!* 'Clonk!'.

If adverbs and prepositions are not assigned to the same class, then adverbs are normally used as a kind of "left over" category in the sense that all non-inflecting words which are neither prepositions, conjunctions nor interjections are classed as adverbs. Sometimes this category for "left overs" is subdivided: only words which can appear in front of the finite verb when used as a constituent are referred to as adverbs. Those words which cannot be fronted are dubbed *particles*. Particles themselves can be subdivided into various classes based on their function, e.g., degree particles and illocutionary particles. Since these functionally defined classes also contain adjectives, I will not make this distinction and simply speak of *adverbs*.

We have already sorted a considerable number of inflecting words into word classes. When one is faced with the task of classifying a particular word, one can use the decision diagram in Figure 1.2, which is taken from the Duden grammar of German (Eisenberg et al. 2005: 133).<sup>10</sup>

Figure 1.2: Decision tree for determining parts of speech following Eisenberg et al. (2005: 133)

<sup>10</sup>The Duden is the official document for the German orthography. The Duden grammar does not have an official status but is very influential and is used for educational purposes as well. I will refer to it several times in this introductory chapter.

If a word inflects for tense, then it is a verb. If it displays different case forms, then one has to check whether it has a fixed gender. If this is indeed the case, then we know that we are dealing with a noun. Words with variable gender have to be checked to see if they have comparative forms. A positive result is a clear indication of an adjective. All other words are placed into a residual category, which the Duden refers to as pronouns/article words. As in the class of non-inflecting elements, the elements in this residual category are subdivided according to their syntactic behavior. The Duden grammar makes a distinction between pronouns and article words. According to this classification, pronouns are words which can replace a noun phrase such as *der Mann* 'the man', whereas article words normally combine with a noun. In Latin grammars, the notion of 'pronoun' includes both pronouns in the above sense and articles, since the forms with and without the noun are identical. Over the past centuries, the forms have developed in different directions, to the point where it is now common in contemporary Romance languages to distinguish between words which replace a noun phrase and those which must occur with a noun. Elements which belong to the latter class are also referred to as *determiners*.
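The procedure just described can be summed up as a small decision function. The following sketch (Python; the function and property names are my own and the word properties are assigned by hand, so this illustrates the tree in Figure 1.2 rather than implementing the Duden classification) walks the tree from top to bottom:

```python
# Decision tree for parts of speech, following the prose description of
# Figure 1.2: tense inflection -> verb; case inflection plus fixed
# gender -> noun; variable gender plus comparative -> adjective; the
# rest fall into residual categories.
def part_of_speech(inflects_for_tense, inflects_for_case=False,
                   fixed_gender=False, has_comparative=False):
    if inflects_for_tense:
        return "verb"
    if inflects_for_case:
        if fixed_gender:
            return "noun"
        if has_comparative:
            return "adjective"
        return "pronoun/article word"  # residual inflecting category
    return "non-inflecting word"       # subdivided syntactically

assert part_of_speech(True) == "verb"                           # lacht
assert part_of_speech(False, True, True) == "noun"              # Biber
assert part_of_speech(False, True, False, True) == "adjective"  # klug
assert part_of_speech(False, True) == "pronoun/article word"    # er
```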

If we follow the decision tree in Figure 1.2, the personal pronouns *ich* 'I', *du* 'you', *er* 'he', *sie* 'she', *es* 'it', *wir* 'we', *ihr* 'you', and *sie* 'they', for example, would be grouped together with the possessive pronouns *mein* 'my', *dein* 'your', *sein* 'his'/'its', *ihr* 'her'/'their', *unser* 'our', and *euer* 'your'. The corresponding reflexive pronouns *mich* 'myself', *dich* 'yourself', *sich* 'himself'/'herself'/'itself'/'themselves', *uns* 'ourselves', *euch* 'yourselves', and the reciprocal pronoun *einander* 'each other' have to be viewed as a special case in German as there are no differing gender forms of *sich* and *einander*. Case is not expressed morphologically by reciprocal pronouns. By replacing genitive, dative and accusative pronouns with *einander*, it is possible to see that there must be variants of *einander* 'each other' for these cases, but that these variants all share the same form:

	- b. Sie they helfen help ihm him.dat / einander. each.other
	- c. Sie they lieben love ihn him.acc / einander. each.other

So-called pronominal adverbs such as *darauf* 'on there', *darin* 'in there', *worauf* 'on where', *worin* 'in where' also prove problematic. These forms consist of a preposition (e.g., *auf* 'on') and the elements *da* 'there' and *wo* 'where'. As the name suggests, *pronominal adverbs* contain something pronominal and this can only be *da* 'there' and *wo* 'where'. However, *da* 'there' and *wo* 'where' do not inflect and would therefore, following the decision tree, not be classed as pronouns.

The same is true of relative pronouns such as *wo* 'where' in (56):

(56) a. Ich I komme come eben part aus from der the Stadt, city *wo* where ich I Zeuge witness eines of.an Unglücks accident gewesen been bin.<sup>11</sup> am 'I come from the city where I was witness to an accident.'

b. Studien studies haben have gezeigt, shown daß that mehr more Unfälle accidents in in Städten cities passieren, happen *wo* where die the Zebrastreifen zebra.crossings abgebaut removed werden, become weil because die the Autofahrer drivers unaufmerksam unattentive werden.<sup>12</sup> become 'Studies have shown that there are more accidents in cities where they do away with zebra crossings, because drivers become unattentive.'

c. Zufällig coincidentally war was ich I in in dem the Augenblick moment zugegen, present *wo* where der the Steppenwolf Steppenwolf zum to.the erstenmal first.time unser our Haus house betrat entered und and bei by meiner my Tante aunt sich refl einmietete.<sup>13</sup> took.lodgings 'Coincidentally, I was present at the exact moment in which Steppenwolf entered our house for the first time and took lodgings with my aunt.'

If they are uninflected, then they cannot belong to the class of pronouns according to the decision tree above. Eisenberg (2004: 277) notes that *wo* 'where' is a kind of *uninflected relative pronoun* (he uses quotation marks) and remarks that this term runs contrary to the exclusive use of the term pronoun for nominal, that is, inflected, elements. He therefore uses the term *relative adverb* for them (see also Eisenberg et al. (2005: §856, §857)).

There are also usages of the relatives *dessen* 'whose' and *wessen* 'whose' in combination with a noun:

	- b. Ich I möchte would.like wissen, know wessen whose Schwester sister du you kennst. know 'I would like to know whose sister you know.'

According to the classification in the Duden, these should be covered by the terms *Relativartikelwort* 'relative article word' and *Interrogativartikelwort* 'interrogative article word'. They are mostly counted as part of the relative pronouns and question pronouns (see for instance Eisenberg (2004: 229)). Using Eisenberg's terminology, this is unproblematic as he does not make a distinction between articles, pronouns and nouns, but rather assigns them all to the class of nouns. But authors who do make a distinction between articles and pronouns sometimes also speak of interrogative pronouns when discussing words which can function as articles or indeed replace an entire noun phrase.

<sup>11</sup>Drosdowski (1984: 672).

<sup>12</sup>taz berlin, 03.11.1997, p. 23.

<sup>13</sup>Herman Hesse, *Der Steppenwolf*. Berlin und Weimar: Aufbau-Verlag. 1986, p. 6.

One should be prepared for the fact that the term *pronoun* is often simply used for words which refer to other entities and, importantly, do so not in the way that nouns such as *book* and *John* do, but rather dependent on context. The personal pronoun *er* 'he' can, for example, refer to either a table or a man. This usage of the term *pronoun* runs contrary to the decision tree in Figure 1.2 and includes uninflected elements such as *da* 'there' and *wo* 'where'.

Expletive pronouns such as *es* 'it' and *das* 'that', as well as the *sich* 'him'/'her'/'itself' belonging to inherently reflexive verbs, do not make reference to actual objects. They are considered pronouns because of their similarity in form. Even if we were to assume a narrow definition of pronouns, we would still get the wrong results, as expletive forms do not vary with regard to case, gender and number. If one does everything by the book, expletives would thus belong to the class of uninflected elements. Alternatively, if we assume that *es* 'it', like the personal pronouns, has nominative and accusative variants with the same form, then expletives would be placed with the nominals. We would then have to count *es* as a noun by assuming neuter gender, analogous to the personal pronouns, even though it is questionable whether an expletive has gender at all.

We have not yet discussed how we would deal with the italicized words in (58):

	- b. das the *schlafende* sleeping Kind child
	- c. die the Frage question des of.the *Sprechens* talking und and *Schreibens* writing über about Gefühle feelings 'the question of talking and writing about feelings'
	- d. Auf on dem the Europa-Parteitag Europe-party.conference fordern demand die the *Grünen* Greens einen a ökosozialen eco-social Politikwechsel. political.change 'At the European party conference, the Greens demand eco-social political change.'


*geliebte* 'beloved' and *schlafende* 'sleeping' are participle forms of *lieben* 'to love' and *schlafen* 'to sleep'. These forms are traditionally treated as part of the verbal paradigm. In this sense, *geliebte* and *schlafende* are verbs. This is referred to as the lexical word class. The term *lexeme* is relevant in this case: all forms in a given inflectional paradigm belong to the same lexeme. In the classic sense, this term also includes the regularly derived forms, that is, participle forms and nominalized infinitives also belong to a verbal lexeme. Not all linguists share this view, however. Particularly problematic is the fact that we are mixing verbal with nominal and adjectival paradigms. For example, *Sprechens* 'speaking.gen' is in the genitive case, and adjectival participles also inflect for case, number and gender. Furthermore, it is unclear why *schlafende* 'sleeping' should be classed as a verbal lexeme while a noun such as *Störung* 'disturbance' is its own lexeme and does not belong to the lexeme *stören* 'to disturb'. I subscribe to the more modern view of grammar and assume that processes which change the word class result in a new lexeme being created. Consequently, *schlafende* 'sleeping' does not belong to the lexeme *schlafen* 'to sleep', but is a form of the lexeme *schlafend*. This lexeme belongs to the word class 'adjective' and inflects accordingly.

As we have seen, it is still controversial where to draw the line between inflection and derivation (the creation of a new lexeme). Sag, Wasow & Bender (2003: 263–264) view the formation of the present participle (*standing*) and the past participle (*eaten*) in English as derivation, since the corresponding forms inflect for gender and number in French.

Adjectives such as *Grünen* 'the Greens' in (58d) are nominalized adjectives and are written with a capital like other nouns in German when there is no other noun that can be inferred from the immediate context:

	- B: Nein, no gib give mir me bitte please den the grünen. green 'No, give me the green one, please.'

In the answer to (59), the noun *Ball* has been omitted. This kind of omission is not present in (58d). One could also assume here that a word class change has taken place. If a word changes its class without combination with a visible affix, we refer to this as *conversion*. Conversion has been treated as a sub-case of derivation by some linguists. The problem is, however, that *Grüne* 'greens' inflects just like an adjective and the gender varies depending on the object it is referring to:

(60) a. Ein a Grüner green.m hat has vorgeschlagen suggested … 'A (male) member of the Green Party suggested …'
	- b. Eine a Grüne green.f hat has vorgeschlagen suggested … 'A (female) member of the Green Party suggested …'

We also have the situation where a word has two properties. We can make life easier for ourselves by talking about *nominalized adjectives*. The lexical category of *Grüne* is adjective and its syntactic category is noun.

The word in (58e) can inflect like an adjective and should therefore be classed as an adjective following our tests. Sometimes, these kinds of adjectives are also classed as adverbs. The reason for this is that the uninflected forms of these adjectives behave like adverbs:

(61) Max Max lacht laughs immer always / oft often / laut. loud 'Max (always/often) laughs (loudly).'

To capture this dual nature of words, some researchers distinguish between the lexical and the syntactic category of a word. The lexical category of *laut* 'loud(ly)' is that of an adjective and the syntactic category to which it belongs is 'adverb'. The classification of adjectives such as *laut* 'loud(ly)' in (61) as adverbs is not assumed by all authors. Instead, some speak of the adverbial usage of an adjective; that is, one assumes that the syntactic category is still adjective, but that the word can be used in a different way, so that it behaves like an adverb (see Eisenberg 2004: Section 7.3, for example). This is parallel to prepositions, which can occur in a variety of syntactic contexts:

(62) a. Er he schläft sleeps im in.the Büro. office 'He sleeps in the office.'
	- b. der the Tisch table im in.the Büro office 'the table in the office'

We have prepositional phrases in both examples in (62); however, in (62a) *im Büro* 'in the office' acts like an adverb in that it modifies the verb *schläft* 'sleeps' and in (62b) *im Büro* modifies the noun *Tisch* 'table'. In the same way, *laut* 'loud' can modify a noun (63) or a verb (61).

(63) die the laute loud Musik music

# **1.5 Heads**

The head of a constituent/phrase is the element which determines the most important properties of the constituent/phrase. At the same time, the head also determines the composition of the phrase. That is, the head requires certain other elements to be present in the phrase. The heads in the following examples have been marked in *italics*:

	- b. *Erwartet* expects er he.nom diesen this.acc Mann? man 'Is he expecting this man?'
	- c. *Hilft* helps er he.nom diesem this.dat Mann? man 'Is he helping this man?'

Verbs determine the case of their arguments (subjects and objects). In (64d), the preposition determines which case the noun phrase *diesem Haus* 'this house' bears (dative) and also determines the semantic contribution of the phrase (it describes a location). (64e) is controversial: there are linguists who believe that the determiner is the head (Ajdukiewicz 1935: 6, Vennemann & Harlow 1977, Brame 1982, Hudson 1984: 90–92, Hellan 1986, Abney 1987, Netter 1994, 1998) while others assume that the noun is the head of the phrase (Van Langendonck 1994, Pollard & Sag 1994: 49, Demske 2001, Müller 2007a: Section 6.6.1, Hudson 2004, Bruening 2009).

The combination of a head with another constituent is called a *projection of the head*. A projection which contains all the necessary parts to create a well-formed phrase of that type is a *maximal projection*. A sentence is the maximal projection of a finite verb.

Figure 1.3 shows the structure of (65) in box representation.

(65) Der the Mann man liest reads einen an Aufsatz. essay 'The man is reading an essay.'

Unlike Figure 1.1, the boxes have been labelled here.

Figure 1.3: Words and phrases in annotated boxes

The annotation includes the category of the most important element in the box. VP stands for *verb phrase* and NP for *noun phrase*. VP and NP are maximal projections of their respective heads.

Anyone who has ever faced the hopeless task of trying to find particular photos of their sister's wedding in a jumbled, unsorted cupboard can vouch for the fact that it is most definitely a good idea to mark the boxes based on their content and also mark the albums based on the kinds of photos they contain.

An interesting point is that the exact content of the box with linguistic material does not play a role when the box is put into a larger box. It is possible, for example, to replace the noun phrase *der Mann* 'the man' with *er* 'he', or indeed with the more complex *der Mann aus Stuttgart, der das Seminar zur Entwicklung der Zebrafinken besucht* 'the man from Stuttgart who takes part in the seminar on the development of zebra finches'. However, it is not possible to use *die Männer* 'the men' or *des Mannes* 'of the man' in this position:

(66) a. \* Die the Männer men liest reads einen an Aufsatz. essay
	- b. \* Des of.the Mannes man.gen liest reads einen an Aufsatz. essay

The reason for this is that *die Männer* 'the men' is plural, while the verb *liest* 'reads' is singular. The genitive noun phrase *des Mannes* 'of the man' cannot occur in this position either: only noun phrases in the nominative case can. It is therefore important to mark all boxes with the information that is relevant for placing them into larger boxes. Figure 1.4 shows our example with more detailed annotation.

Figure 1.4: Words and word strings in annotated boxes

The features of a head which are relevant for determining in which contexts a phrase can occur are called *head features*. The features are said to be *projected* by the head.
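To make the box metaphor concrete, the following sketch (Python; all names and the particular feature choices are invented for illustration, and it simply treats the noun as the head of the noun phrase, which, as noted above, is controversial) shows how the head features of a box can be projected and then checked when the box is placed into a larger box:

```python
# A box records its category and the features projected from its head.
def project(cat, head, daughters):
    """Build a phrase box whose features are those of its head."""
    return {"cat": cat, "features": head["features"], "daughters": daughters}

def fits(requirements, box):
    """Check whether a box may be placed into a slot of a larger box."""
    return all(box["features"].get(k) == v for k, v in requirements.items())

der  = {"cat": "Det", "features": {"case": "nom", "num": "sg"}, "daughters": []}
mann = {"cat": "N",   "features": {"case": "nom", "num": "sg"}, "daughters": []}
np   = project("NP", mann, [der, mann])   # der Mann

# liest 'reads' needs a nominative singular box as its subject, which
# excludes die Männer (plural) and des Mannes (genitive), cf. (66):
subject_slot = {"case": "nom", "num": "sg"}
assert fits(subject_slot, np)
assert not fits(subject_slot,
                {"cat": "NP", "features": {"case": "nom", "num": "pl"}})
```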

# **1.6 Arguments and adjuncts**

The constituents of a given clause have different relations to their head. It is typical to distinguish between arguments and adjuncts. The syntactic arguments of a head correspond for the most part to their logical arguments. We can represent the meaning of (67a) as (67b) using predicate logic.

(67) a. Peter helps Maria.
	- b. *help*′(*peter*′, *maria*′)

The logical representation in (67b) resembles what is expressed in (67a); however, it abstracts away from constituent order and inflection. *Peter* and *Maria* are syntactic arguments of the verb *help* and their respective meanings (*peter*′ and *maria*′) are arguments of the logical relation expressed by *help*′. One could also say that *help* assigns semantic roles to its arguments. Semantic roles include the agent (the person carrying out an action), the patient (the affected person or thing), the beneficiary (the person who receives something) and the experiencer (the person experiencing a psychological state). The subject of *help* is an agent and the direct object is a beneficiary. Arguments which fulfil a semantic role are also called *actants*. This term is also used for inanimate objects.

This kind of relation between a head and its arguments is covered by the terms *selection* and *valence*. Valence is a term borrowed from chemistry. Atoms can combine with other atoms to form molecules with varying levels of stability. The way in which the electron shells are occupied plays an important role for this stability: if an atom combines with other atoms such that its electron shell is fully occupied, then this leads to a stable connection. Valence tells us something about the number of hydrogen atoms with which an atom of a certain element can combine. In forming H₂O, oxygen has a valence of 2. We can divide elements into valence classes. Following Mendeleev, elements with a particular valence are listed in the same column of the periodic table.

The concept of valence was applied to linguistics by Tesnière (1959): a head needs certain arguments in order to form a stable compound. Words with the same valence – that is which require the same number and type of arguments – are divided into valence classes. Figure 1.5 shows examples from chemistry as well as linguistics.

Figure 1.5: Combination of hydrogen and oxygen and the combination of a verb with its arguments

I used (67) to explain logical valence. Logical valence can, however, sometimes differ from syntactic valence. This is the case with verbs like *rain*, which require an expletive pronoun as an argument. Inherently reflexive verbs such as *sich erholen* 'to recover' in German are another example.

(68) a. Es it regnet. rains 'It is raining.'
	- b. Klaus Klaus erholt recovers sich. refl 'Klaus is recovering.'

The expletive *es* 'it' with weather verbs and the *sich* of so-called inherent reflexives such as *erholen* 'to recover' have to be present in the sentence. Germanic languages have expletive elements that are used to fill the position preceding the finite verb. These positional expletives are not realized in embedded clauses in German, since embedded clauses have a structure that differs from canonical unembedded declarative clauses, which have the finite verb in second position. (69a) shows that *es* cannot be omitted in *dass*-clauses.

	- b. \* Ich I glaube, believe dass that Klaus Klaus erholt. recovers Intended: 'I believe that Klaus is recovering.'

Neither the expletive nor the reflexive pronoun contributes anything semantically to the sentence. They must, however, be present to derive a complete, well-formed sentence. They therefore form part of the valence of the verb.
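One way to picture syntactic valence is as a lexicon that records, for each head, the argument slots that have to be filled for a complete sentence to result. The following sketch (Python; the entries and the notation such as NP[nom] are invented for illustration and do not belong to any of the theories discussed in later chapters) encodes the fact that the expletive and the reflexive belong to the valence of their verbs even though they contribute nothing semantically:

```python
# Syntactic valence as a lexicon: each verb lists the arguments it
# requires; 'NP[expl]' marks the semantically empty es of weather verbs
# and 'refl' the sich of inherently reflexive verbs.
VALENCE = {
    "helfen":  ["NP[nom]", "NP[dat]"],  # Peter hilft Maria.
    "regnen":  ["NP[expl]"],            # Es regnet.
    "erholen": ["NP[nom]", "refl"],     # Klaus erholt sich.
}

def is_complete(verb, realized_args):
    """A clause is well-formed only if every valence slot is filled."""
    return sorted(VALENCE[verb]) == sorted(realized_args)

assert is_complete("erholen", ["NP[nom]", "refl"])
assert not is_complete("erholen", ["NP[nom]"])  # cf. (69b)
```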

Constituents which do not contribute to the central meaning of their head, but rather provide additional information are called *adjuncts*. An example is the adverb *deeply* in (70):

(70) Kim loves Sandy deeply.

This says something about the intensity of the relation described by the verb. Further examples of adjuncts are attributive adjectives (71a) and relative clauses (71b):

	- b. the squirrel *who Kim feeds*

Adjuncts have the following syntactic/semantic properties:

(72) a. Adjuncts do not fulfil a semantic role.
	- b. Adjuncts are optional.
	- c. Adjuncts can be iterated.

The phrase in (71a) can be extended by adding another adjunct:

(73) a big grey squirrel

If one puts processing problems aside for a moment, this kind of extension by adding adjectives could proceed infinitely (see the discussion of (38) on page 65). Arguments, on the other hand, cannot be realized more than once:

(74) \* The man the boy sleeps.

If the entity carrying out the sleeping action has already been mentioned, then it is not possible to have another noun phrase which refers to a sleeping individual. If one wants to express the fact that more than one individual is sleeping, this must be done by means of coordination as in (75):

(75) The man and the boy are sleeping.

One should note that the criteria for identifying adjuncts proposed in (72) are not sufficient, since there are also syntactic arguments that do not fill semantic roles (e.g., *es* 'it' in (68a) and *sich* (refl) in (68b)) or that are optional, like *pizza* in (76).

(76) Tony is eating (pizza).

Heads normally determine the syntactic properties of their arguments in a relatively fixed way. A verb is responsible for the case which its arguments bear.

(77) a. Er he gedenkt remembers des the.gen Opfers. victim.gen 'He remembers the victim.'
	- b. \* Er he gedenkt remembers dem the.dat Opfer. victim
	- c. Er he hilft helps dem the.dat Opfer. victim 'He helps the victim.'
	- d. \* Er he hilft helps des the.gen Opfers. victim.gen

The verb *governs* the case of its arguments.

The preposition and the case of the noun phrase in the prepositional phrase are both determined by the verb:<sup>14</sup>

(78) a. Er He denkt thinks an on seine his.acc Modelleisenbahn. model.railway 'He is thinking of his model railway.'
	- b. # Er He denkt thinks an on seiner his.dat Modelleisenbahn. model.railway
	- c. Er He hängt hangs an on seiner his.dat Modelleisenbahn. model.railway 'He clings to his model railway.'
	- d. \* Er he hängt hangs an on seine his.acc Modelleisenbahn. model.railway

The case of noun phrases in modifying prepositional phrases, on the other hand, depends on their meaning. In German, directional prepositional phrases normally require a noun phrase bearing accusative case (79a), whereas local PPs (denoting a fixed location) appear in the dative case (79b):

(79) a. Er he geht goes in in die the.acc Schule school / auf on den the.acc Weihnachtsmarkt Christmas.market / unter under die the.acc Brücke. bridge

'He is going to school/to the Christmas market/under the bridge.'

b. Er he schläft sleeps in in der the.dat Schule school / auf on dem the.dat Weihnachtsmarkt Christmas.market / unter under der the.dat Brücke. bridge 'He is sleeping at school/at the Christmas market/under the bridge.'

<sup>14</sup>For similar examples, see Eisenberg (1994b: 78).

An interesting case is the verb *sich befinden* 'to be located', which expresses the location of something. This cannot occur without some information about the location pertaining to the verb:

(80) \* Wir we befinden are.located uns. refl

The exact form of this information is not fixed – neither the syntactic category nor the preposition inside of prepositional phrases is restricted:

(81) Wir we befinden are.located uns refl hier here / unter under der the Brücke bridge / neben next.to dem the Eingang entrance / im in Bett. bed 'We are here/under the bridge/next to the entrance/in bed.'

Local modifiers such as *hier* 'here' or *unter der Brücke* 'under the bridge' are analyzed with regard to other verbs (e.g., *schlafen* 'sleep') as adjuncts. For verbs such as *sich befinden* 'to be (located)', we will most likely have to assume that information about location forms an obligatory syntactic argument of the verb.

The verb selects a phrase with information about location, but does not place any syntactic restrictions on its type. This specification of location behaves semantically like the other adjuncts we have seen previously. If I just consider the semantic aspects of the combination of a head and an adjunct, then I also refer to the adjunct as a *modifier*.<sup>15</sup> Arguments specifying location with verbs such as *sich befinden* 'to be located' are also subsumed under the term *modifier*. Modifiers are normally adjuncts, and therefore optional, whereas in the case of *sich befinden* they seem to be (obligatory) arguments.

In conclusion, we can say that constituents that are required to occur with a certain head are arguments of that head. Furthermore, constituents which fulfil a semantic role with regard to the head are also arguments. These kinds of arguments can, however, sometimes be optional.

Arguments are normally divided into subjects and complements.<sup>16</sup> Not all heads require a subject (see Müller 2007a: Section 3.2). The number of arguments of a head can therefore also correspond to the number of complements of a head.

# **1.7 Grammatical functions**

In some theories, grammatical functions such as subject and object form part of the formal description of language (see Chapter 7 on Lexical Functional Grammar, for example). This is not the case for the majority of the theories discussed here, but these terms are used for the informal description of certain phenomena. For this reason, I will briefly discuss them in what follows.

<sup>15</sup>See Section 1.7.2 for more on the grammatical function of adverbials. The term adverbial is normally used in conjunction with verbs. *modifier* is a more general term, which normally includes attributive adjectives.

<sup>16</sup>In some schools the term complement is understood to include the subject, that is, the term complement is equivalent to the term argument (see for instance Groß 2003: 342). Some researchers treat some subjects, e.g., those of finite verbs, as complements (Pollard 1996; Eisenberg 1994a: 376).

## **1.7.1 Subjects**

Although I assume that the reader has a clear intuition about what a subject is, it is by no means a trivial matter to arrive at a definition of the word *subject* which can be used cross-linguistically. For German, Reis (1982) suggested the following syntactic properties as definitional for subjects:

• agreement of the finite verb with it

• nominative case in non-copular clauses

• omission in infinitival clauses (control constructions)

• omission in imperatives

I have already discussed agreement in conjunction with the examples in (4). Reis (1982) argues that the second bullet point is a suitable criterion for German. She formulates a restriction to non-copular clauses because there can be more than one nominative argument in sentences with predicate nominals such as (82):

(82) a. Er he.nom ist is ein a Lügner. liar.nom 'He is a liar.'
	- b. Er he.nom wurde was ein a Lügner liar.nom genannt. called 'He was called a liar.'

Following this criterion, arguments in the dative case such as *den Männern* 'the men' cannot be classed as subjects in German:

	- b. Den the.dat Männern men.dat wurde was.3SG geholfen. helped 'The men were helped.'

Following the other criteria, datives should also not be classed as subjects – as Reis (1982) has shown. In (83b), *wurde*, which is the 3rd person singular form, does not agree with *den Männern*. The third of the aforementioned criteria deals with infinitive constructions such as those in (84):

(84) a. Klaus Klaus behauptet, claims den the.dat Männern men.dat zu to helfen. help 'Klaus claims to be helping the men.'


In the first sentence, an argument of the verb *helfen* 'to help' has been omitted. If one wishes to express it, then one would have to use the subordinate clause beginning with *dass* 'that' as in (84b). Examples (84c,d) show that infinitives which do not require a nominative argument cannot be embedded under verbs such as *behaupten* 'to claim'. If the dative noun phrase *den Männern* 'the men' were the subject in (83b), we would expect the control construction (84c) to be well-formed. This is, however, not the case. Instead of (84c), it is necessary to use (85):

(85) Die the Männer men.nom behaupten, claim dass that ihnen them.dat geholfen helped wird. aux 'The men claim that they are being helped.'

In the same way, imperatives are not possible with verbs that do not require a nominative. (86) shows some examples from Reis (1982: 186).

(86) a. Fürchte be.scared dich refl nicht! not 'Don't be scared!'
	- b. \* Graue dread nicht! not 'Don't dread it!'
	- c. Werd be einmal once unterstützt supported und and … 'Let someone support you for once and …'
	- d. \* Werd be einmal once geholfen helped und and … 'Let someone help you and …'

The verb *sich fürchten* 'to be scared' in (86a) obligatorily requires a nominative argument as its subject (87a). The similar verb *grauen* 'to dread' in (86b) takes a dative argument (87b).

(87) a. Ich I.nom fürchte be.scared mich refl vor before Spinnen. spiders 'I am scared of spiders.'

b. Mir me.dat graut dreads vor before Spinnen. spiders 'I am dreading spiders.'

Interestingly, dative arguments in Icelandic behave differently. Zaenen et al. (1985) discuss various characteristics of subjects in Icelandic and show that it makes sense to describe dative arguments as subjects in passive sentences even if the finite verb does not agree with them (Section 3.1) or they do not bear nominative case. An example of this is infinitive constructions with an omitted dative argument (p. 457):

	- b. Að to vera be hjálpað helped í on prófinu the.exam er is óleyfilegt. not.allowed 'It is not allowed for one to be helped during the exam.'

In a number of grammars, clausal arguments such as those in (89) are classed as subjects as they can be replaced by a noun phrase in the nominative (90) (see e.g., Eisenberg 2004: 63, 289).

	- b. Dass that er he Maria Maria geheiratet married hat, has gefällt pleases mir. me 'I'm glad that he married Maria.'
	- b. Das that gefällt pleases mir. me 'I like that.'

It should be noted that there are different opinions on the question of whether clausal arguments should be treated as subjects or not. As recent publications show, there is still some discussion of this in Lexical Functional Grammar (see Chapter 7) (Dalrymple & Lødrup 2000, Berman 2003b, 2007, Alsina, Mohanan & Mohanan 2005, Forst 2006).

If we can be clear about what we want to view as a subject, then the definition of object is no longer difficult: objects are all other arguments whose form is directly determined by a given head. As well as clausal objects, German has genitive, dative, accusative and prepositional objects:

(91) a. Sie they gedenken remember des the.gen Mannes. man.gen 'They remember the man.'
	- b. Sie they helfen help dem the.dat Mann. man.dat 'They are helping the man.'
	- c. Sie they kennen know den the.acc Mann. man.acc 'They know the man.'
	- d. Sie they denken think an on den the Mann. man 'They are thinking of the man.'

As well as defining objects by their case, it is commonplace to talk of *direct objects* and *indirect objects*. The direct object gets its name from the fact that – unlike the indirect object – the referent of a direct object is directly affected by the action denoted by the verb. With ditransitives such as the German *geben* 'to give', the accusative object is the direct object and the dative is the indirect object.

(92) dass that er he.nom dem the.dat Mann man.dat den the.acc Aufsatz essay.acc gibt gives 'that he gives the man the essay'

For trivalent verbs (verbs taking three arguments), we see that the verb can take either an object in the genitive case (93a) or, for verbs with a direct object in the accusative, a second accusative object (93b):

	- b. dass that er he den the.acc Mann man.acc den the.acc Vers verse.acc lehrte taught 'that he taught the man the verse'

These kinds of objects are sometimes also referred to as indirect objects.

Normally, only those objects which are promoted to subject in passives with *werden* 'to be' are classed as direct objects. This is important for theories such as LFG (see Chapter 7) since passivization is defined with reference to grammatical function. With two-place verbal predicates, the dative is not normally classed as a direct object (Cook 2006).

(94) dass that er he dem the.dat Mann man.dat hilft helps 'that he helps the man'

In many theories, grammatical function does not form a primitive component of the theory, but rather corresponds to positions in a tree structure. The direct object in German is therefore the object which is first combined with the verb in a configuration assumed to be the underlying structure of German sentences. The indirect object is the second object to be combined with the verb. On this view, the dative object of *helfen* 'to help' would have to be viewed as a direct object.

In the following, I will simply refer to the case of objects and avoid using the terms direct object and indirect object.

In the same way as with subjects, we consider whether there are object clauses which are equivalent to a certain case and can fill the respective grammatical function of a direct or indirect object. If we assume that *dass du sprichst* 'that you are speaking' in (95a) is a subject, then the subordinate clause must be a direct object in (95b):

(95) a. Dass du sprichst, wird erwähnt.
        that you speak is mentioned
        'The fact that you're speaking is being mentioned.'
     b. Er erwähnt, dass du sprichst.
        he mentions that you speak
        'He mentions that you are speaking.'

In this case, we cannot really view the subordinate clause as the accusative object since it does not bear case. However, we can replace the sentence with an accusative-marked noun phrase:

(96) Er erwähnt diesen Sachverhalt.
        he mentions this.acc matter
        'He mentions this matter.'

If we want to avoid this discussion, we can simply call these arguments clausal objects.

## **1.7.2 The adverbial**

Adverbials differ semantically from subjects and objects. They tell us something about the conditions under which an action or process takes place, or the way in which a certain state persists. In the majority of cases, adverbials are adjuncts, but there are – as we have already seen – a number of heads which also require adverbials. Examples of these are verbs such as *to be located* or *to make one's way*. For *to be located*, it is necessary to specify a location, and for *to make one's way* a direction is needed. These kinds of adverbials are therefore regarded as arguments of the verb.

The term *adverbial* comes from the fact that adverbials are often adverbs. This is not the only possibility, however. Adjectives, participles, prepositional phrases, noun phrases and even sentences can be adverbials:

(97) a. Er arbeitet sorgfältig.
        he works carefully
     d. Er arbeitet den ganzen Tag.
        he works the.acc whole.acc day
Although the noun phrase in (97d) bears accusative case, it is not an accusative object. *den ganzen Tag* 'the whole day' is a so-called temporal accusative. The occurrence of accusative in this case has to do with the syntactic and semantic function of the noun phrase; it is not determined by the verb. These kinds of accusatives can occur with a variety of verbs, even with verbs that do not normally require an accusative object:

(98) b. Er liest den ganzen Tag diesen schwierigen Aufsatz.
        he reads the.acc whole.acc day this.acc difficult.acc essay
        'He spends the whole day reading this difficult essay.'
     c. Er gibt den Armen den ganzen Tag Suppe.
        he gives the.dat poor.dat the.acc whole.acc day soup
        'He spends the whole day giving soup to the poor.'

The case of adverbials does not change under passivization:

(99) a. weil den ganzen Tag gearbeitet wurde
        because the.acc whole.acc day worked was
     b. * weil der ganze Tag gearbeitet wurde
        because the.nom whole.nom day worked was

## **1.7.3 Predicatives**

Adjectives like those in (100a,b) as well as noun phrases such as *ein Lügner* 'a liar' in (100c) are counted as predicatives.

(100) a. Klaus ist *klug*.
        Klaus is clever


In the copula constructions in (100a,c), the adjective *klug* 'clever' and the noun phrase *ein Lügner* 'a liar' are arguments of the copula *sein* 'to be', and the depictive adjective in (100b) is an adjunct to *isst* 'eats'.

For predicative noun phrases, case is not determined by the head but rather by some other element.<sup>17</sup> For example, the accusative in (101a) becomes nominative under passivization (101b):

(101) b. Er wurde ein Lügner genannt.
        he.nom was a.nom liar called
        'He was called a liar.'

Only *ihn* 'him' can be described as an object in (101a). In (101b), *ihn* becomes the subject and therefore bears nominative case. *einen Lügner* 'a liar' refers to *ihn* 'him' in (101a) and to *er* 'he' in (101b) and agrees in case with the noun over which it predicates. This is also referred to as *agreement case*.

<sup>17</sup>There is some dialectal variation with regard to copula constructions: in Standard German, the case of the noun phrase with *sein* 'to be' is always nominative and does not change when embedded under *lassen* 'to let':

(i) b. Der wüste Kerl ist ihr Komplize.
        the wild guy is her.nom accomplice
    c. Laß den wüsten Kerl […] meinetwegen ihr Komplize sein.
        let the.acc wild.acc guy for.all.I.care her.nom accomplice be
        'Let's assume that the wild guy is her accomplice, for all I care.' (Grebe & Gipper 1966: § 6925)
    d. Baby, laß mich dein Tanzpartner sein.
        baby let me.acc your.nom dancing.partner be
        'Baby, let me be your dancing partner!' (Funny van Dannen, Benno-Ohnesorg-Theater, Berlin, Volksbühne, 11.10.1995)

According to Drosdowski (1995: § 1259), the accusative form that one finds in examples such as (ii.a) is common in Switzerland:

(ii) b. * Er lässt den lieben Gott 'n frommer Mann sein.
        he lets the.acc dear.acc god a pious.nom man be



For other predicative constructions, see Eisenberg et al. (2005: § 1206) as well as Müller (2002a: Chapters 4 and 5) and Müller (2008).

## **1.7.4 Valence classes**

It is possible to divide verbs into subclasses depending on how many arguments they require and on the properties these arguments are required to have. The classic division describes all verbs which have an object which becomes the subject under passivization as *transitive*. Examples of this are verbs such as *love* or *beat*. Intransitive verbs, on the other hand, are verbs which have either no object, or one that does not become the subject in passive sentences. Examples of this type of verb are *schlafen* 'to sleep', *helfen* 'to help', *gedenken* 'to remember'. A subclass of transitive verbs are ditransitive verbs such as *geben* 'to give' and *zeigen* 'to show'.

Unfortunately, this terminology is not always used consistently. Sometimes, two-place verbs with dative and genitive objects are also classed as transitive verbs. In this naming tradition, the terms intransitive, transitive and ditransitive are synonymous with one-place, two-place and three-place verbs.

The fact that this terminological confusion can lead to misunderstandings between even established linguists is shown by Culicover & Jackendoff's (2005: 59) criticism of Chomsky. Chomsky states that the combination of the English auxiliary *be* + verb with passive morphology can only be used for transitive verbs. Culicover and Jackendoff claim that this cannot be true because there are transitive verbs such as *weigh* and *cost* which cannot undergo passivization:

(102) a. This book weighs ten pounds / costs ten dollars.
      b. * Ten pounds are weighed / ten dollars are cost by this book.

Culicover & Jackendoff use *transitive* in the sense of a verb requiring two arguments. If we only view those verbs whose object becomes the subject of a passive clause as transitive, then *weigh* and *cost* no longer count as transitive verbs and Culicover and Jackendoff's criticism no longer holds.<sup>18</sup> That noun phrases such as those in (102) are not ordinary objects can also be seen from the fact that they cannot be replaced by pronouns. It is therefore not possible to ascertain which case they bear, since case distinctions are only realized on pronouns in English. However, if we translate the English examples into German, we see that the case is accusative:

(103) a. Das Buch kostet einen Dollar.
        the book costs one.acc dollar
        'The book costs one dollar.'

<sup>18</sup>Their criticism also turns out to be unjustified even if one views transitives as two-place predicates. If one claims that a verb must take at least two arguments to be able to undergo passivization, one is not necessarily claiming that all verbs taking two or more arguments have to allow passivization. The property of taking multiple arguments is a condition which must be fulfilled, but it is by no means the only one.

      b. Das Buch wiegt einen Zentner.
        the book weighs one.acc centner
        'The book weighs one centner.'

German is parallel to English in that these accusatives cannot be replaced by personal pronouns. They are counted among the adverbial accusatives, which can be used to express measurements, weights, and durations (Duden 2005: §1246). This means that they are not objects that could be promoted to subject under passivization.

In the following, I will use *transitive* in the former sense, that is for verbs with an object that becomes the subject when passivized (e.g., with *werden* in German). When I talk about the class of verbs that includes *helfen* 'to help', which takes a nominative and dative argument, and *schlagen* 'to hit', which takes a nominative and accusative argument, I will use the term *two-place* or *bivalent verb*.

# **1.8 A topological model of the German clause**

In this section, I introduce the concept of so-called *topological fields* (*topologische Felder*). These will be used frequently in later chapters to discuss different parts of the German clause. One can find further, more detailed introductions to topology in Reis (1980), Höhle (1986) and Askedal (1986a,b). Wöllstein (2010) is a textbook about the topological field model.

## **1.8.1 The position of the verb**

It is common practice to divide German sentences into three types pertaining to the position of the finite verb:

1. verb-final clauses
2. verb-first (verb-initial) clauses
3. verb-second clauses

The following examples illustrate these possibilities:

(104) a. dass Peter das Eis gegessen *hat*
        that Peter the ice.cream eaten has
        'that Peter has eaten the ice cream'
      b. *Hat* Peter das Eis gegessen?
        has Peter the ice.cream eaten
        'Has Peter eaten the ice cream?'
      c. Peter *hat* das Eis gegessen.
        Peter has the ice.cream eaten
        'Peter has eaten the ice cream.'

## **1.8.2 The sentence bracket, prefield, middle field and postfield**

We observe that the finite verb *hat* 'has' is only adjacent to its complement *gegessen* 'eaten' in (104a). In (104b) and (104c), the verb and its complement are separated, that is, discontinuous. We can then divide the German clause into various sub-parts on the basis of these distinctions. In (104b) and (104c), the finite verb and the non-finite verb form a "bracket" around the clause. For this reason, we call this the *sentence bracket* (*Satzklammer*). The finite verbs in (104b) and (104c) form the left bracket and the non-finite verbs form the right bracket. Clauses with verb-final order are usually introduced by conjunctions such as *weil* 'because', *dass* 'that' and *ob* 'whether'. These conjunctions occupy the same position as the finite verb in verb-initial or verb-second clauses. We therefore also assume that these conjunctions form the left bracket in these cases. Using the notion of the sentence bracket, it is possible to divide the structure of the German clause into the prefield (*Vorfeld*), middle field (*Mittelfeld*) and postfield (*Nachfeld*): the prefield is everything preceding the left sentence bracket, the middle field is the section between the left and right bracket, and the postfield is the position after the right bracket. Tables 1.1 and 1.2 give some examples of this. The right bracket can contain multiple verbs


Table 1.1: Examples of how topological fields can be occupied in declarative main clauses

and is often referred to as a *verbal complex* or *verb cluster*. The assignment of question words and relative pronouns to the prefield will be discussed in the following section.
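To make the field assignment concrete, the following sketch (an illustration in Python-style notation, not part of the field model itself; the field names are simply the English terms used above) segments the sentences in (104b) and (104c) into their topological fields:

```python
# Topological field segmentation of (104b) and (104c), written as plain
# dictionaries. Empty lists mark unoccupied fields.

verb_first = {                   # (104b) Hat Peter das Eis gegessen?
    "prefield": [],
    "left_bracket": ["Hat"],
    "middle_field": ["Peter", "das", "Eis"],
    "right_bracket": ["gegessen"],
    "postfield": [],
}

verb_second = {                  # (104c) Peter hat das Eis gegessen.
    "prefield": ["Peter"],
    "left_bracket": ["hat"],
    "middle_field": ["das", "Eis"],
    "right_bracket": ["gegessen"],
    "postfield": [],
}
```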



### **1.8.3 Assigning elements to fields**

As the examples in Tables 1.1 and 1.2 show, it is not required that all fields are always occupied. Even the left bracket can be empty if one opts to leave out the copula *sein* 'to be', as in the examples in (105):

(105) a. […] egal, was noch passiert, der Norddeutsche Rundfunk steht schon jetzt als Gewinner fest.<sup>19</sup>
        regardless what still happens the north.German broadcasting.company stands already now as winner part
        'Regardless of what still may happen, the North German broadcasting company is already the winner.'

<sup>19</sup>Spiegel, 12/1999, p. 258.


The examples in (105) correspond to those with the copula in (106):

(106) b. Interessant ist zu erwähnen, dass ihre Seele völlig in Ordnung war.
        interesting is to mention that her soul completely in order was
        'It is interesting to note that her soul was completely fine.'
      c. Ein Treppenwitz der Musikgeschichte ist, dass die Kollegen von Rammstein vor fünf Jahren noch im Vorprogramm von Sandow spielten.
        an afterwit of.the music.history is that the colleagues of Rammstein before five years still in pre.programme of Sandow played
        'It is one of the little ironies of music history that five years ago their colleagues of Rammstein were still an opening act for Sandow.'

When fields are empty, it is sometimes not clear which fields are occupied by certain constituents. For the examples in (105), one would have to insert the copula to be able to ascertain that a single constituent is in the prefield and, furthermore, which fields are occupied by the other constituents.

In the following example taken from Paul (1919: 13), inserting the copula yields a different result:

(107) a. Niemand da?
        nobody there
        'Nobody there?'
      b. Ist niemand da?
        is nobody there
        'Is nobody there?'

Here we are dealing with a question, and *niemand* 'nobody' in (107a) should therefore not be analyzed as being in the prefield but rather in the middle field.

<sup>20</sup>Michail Bulgakow, *Der Meister und Margarita*. München: Deutscher Taschenbuch Verlag. 1997, p. 422.

<sup>21</sup>Flüstern & Schweigen, taz, 12.07.1999, p. 14.

In (108), there are elements in the prefield, the left bracket and the middle field. The right bracket is empty.

(108) Der Delphin gibt dem Kind den Ball, das er kennt.
        the dolphin(m) gives the child(n) the ball(m) that.n he.m knows
        'The dolphin gives the ball to the child that he knows.'

How should we analyze relative clauses such as *das er kennt* 'that he knows'? Do they form part of the middle field or the postfield? This can be tested using a test developed by Bech (1955: 72) (*Rangprobe*): first, we modify the example in (108) so that it is in the perfect. Since non-finite verb forms occupy the right bracket, we can clearly see the border between the middle field and the postfield. The examples in (109) show that the relative clause cannot occur in the middle field unless it is part of a complex constituent with the head noun *Kind* 'child'.

(109) a. Der Delphin hat dem Kind den Ball gegeben, das er kennt.
        the dolphin has the child the ball given who he knows
      b. * Der Delphin hat [dem Kind] den Ball, [das er kennt,] gegeben.
        the dolphin has the child the ball who he knows given
      c. Der Delphin hat [dem Kind, das er kennt,] den Ball gegeben.
        the dolphin has the child who he knows the ball given

This test does not help if the relative clause is realized together with its head noun at the end of the sentence as in (110):

(110) Er gibt das Buch der Frau, die er kennt.<sup>22</sup>
        he gives the book the woman that he knows
        'He gives the book to the woman that he knows.'

If we put the example in (110) in the perfect, then we observe that the lexical verb can occur before or after the relative clause:

(111) a. Er hat das Buch der Frau gegeben, die er kennt.
        he has the book the woman given that he knows
      b. Er hat das Buch [der Frau, die er kennt,] gegeben.
        he has the book the woman that he knows given

In (111a), the relative clause has been extraposed. In (111b), it forms part of the noun phrase *der Frau, die er kennt* 'the woman that he knows' and therefore occurs inside the NP in the middle field. It is therefore not possible to rely on this test for (110). We assume that the relative clause in (110) also belongs to the NP since this is the simplest structure.

<sup>22</sup>The sentence requires emphasis on *der* 'the'. *der Frau, die er kennt* 'the woman' is contrasted with another woman or other women.

If the relative clause were in the postfield, we would have to assume that it has undergone extraposition from its position inside the NP. That is, we would have to assume the NP structure anyway and then extraposition in addition.

We have a similar problem with interrogative and relative pronouns. Depending on the author, these are assumed to be in the left bracket (Kathol 2001; Dürscheid 2003: 94–95; Eisenberg 2004: 403; Pafel 2011: 54, 57, 69–70), in the prefield (Eisenberg et al. 2005: §1345; Wöllstein 2010: 29–30, Section 3.1) or even in the middle field (Altmann & Hofman 2004: 75). In Standard German interrogative and relative clauses, the left bracket and the prefield are never both occupied at the same time. For this reason, it is not immediately clear to which field an element belongs. Nevertheless, we can draw parallels to main clauses: the pronouns in interrogative and relative clauses can be contained inside complex phrases:

(112) b. Ich möchte wissen, [mit wem] du gesprochen hast.
        I want.to know with whom you spoken have
        'I want to know who you spoke to.'

Normally, only individual words (conjunctions or verbs) can occupy the left bracket,<sup>23</sup> whereas words and phrases can appear in the prefield. It therefore makes sense to assume that interrogative and relative pronouns (and phrases containing them) also occur in this position.

Furthermore, it can be observed that the dependency between the elements in the *Vorfeld* of declarative clauses and the remaining sentence is of the same kind as the dependency between the phrase that contains the relative pronoun and the remaining sentence. For instance, *über dieses Thema* 'about this topic' in (113a) depends on *Vortrag* 'talk', which is deeply embedded in the sentence: *einen Vortrag* 'a talk' is an argument of *zu halten* 'to hold', which in turn is an argument of *gebeten* 'asked'.

(113) a. Über dieses Thema habe ich ihn gebeten, einen Vortrag zu halten.
        about this topic have I him asked a talk to hold
        'I asked him to give a talk about this topic.'
      b. das Thema, über das ich ihn gebeten habe, einen Vortrag zu halten
        the topic about which I him asked have a talk to hold
        'the topic about which I asked him to give a talk'

The situation is similar in (113b): the relative phrase *über das* 'about which' is a dependent of *Vortrag* 'talk' which is realized far away from it. Thus, if the relative phrase is assigned to the *Vorfeld*, it is possible to say that such nonlocal frontings always target the *Vorfeld*.

<sup>23</sup>Coordination is an exception to this:

(i) Sie [kennt und liebt] diese Schallplatte.
        she knows and loves this record
        'She knows and loves this record.'

Finally, the Duden grammar (Eisenberg et al. 2005: §1347) provides the following examples from non-standard German (mainly southern dialects):

(115) Du bist der beste Sänger, den wo ich kenn.
        you are the best singer who where I know
        'You are the best singer whom I know.'

These examples of interrogative and relative clauses show that the left sentence bracket is filled with a conjunction (*dass* 'that' or *wo* 'where' in the respective dialects). So if one wants to have a model that treats Standard German and the dialectal forms uniformly, it is reasonable to assume that the relative phrases and interrogative phrases are located in the *Vorfeld*.

## **1.8.4 Recursion**

As already noted by Reis (1980: 82), when occupied by a complex constituent, the prefield can be subdivided into further fields including, for example, a postfield. The constituents *für lange lange Zeit* 'for a long, long time' in (116b) and *daß du kommst* 'that you are coming' in (116d) are inside the prefield but occur to the right of the right bracket *verschüttet* 'buried' / *gewußt* 'knew', that is, they are in the postfield of the prefield.

(116) a. Die Möglichkeit, etwas zu verändern, ist damit verschüttet für lange lange Zeit.
        the possibility something to change is there.with buried for long long time
        'The possibility to change something will now be gone for a long, long time.'

      b. [Verschüttet für lange lange Zeit] ist damit die Möglichkeit, etwas zu verändern.
        buried for long long time is there.with the possibility something to change

Like constituents in the prefield, elements in the middle field and postfield can also have an internal structure and be divided into subfields accordingly. For example, *daß* 'that' is the left bracket of the subordinate clause *daß du kommst* in (116c), whereas *du* 'you' occupies the middle field and *kommst* 'come' the right bracket.

# **Comprehension questions**

	- (117) a. he
		- b. Go!
		- c. quick
(118) Er hilft den kleinen Kindern in der Schule.
        he helps the small children in the school
        'He helps small children at school.'

## **Exercises**

1. Identify the sentence brackets, prefield, middle field and postfield in the following sentences. Do the same for the embedded clauses!

(119) a. Karl isst.
        Karl eats
        'Karl is eating.'
      b. Der Mann liebt eine Frau, den Peter kennt.
        the man loves a woman who Peter knows
        'The man who Peter knows loves a woman.'
      c. Der Mann liebt eine Frau, die Peter kennt.
        the man loves a woman that Peter knows
        'The man loves a woman who Peter knows.'
      d. Die Studenten haben behauptet, nur wegen der Hitze einzuschlafen.
        the students have claimed only because.of the heat to.fall.asleep
        'The students claimed that they were only falling asleep because of the heat.'
      e. Dass Aicke nicht kommt, ärgert Conny.
        that Aicke not comes annoys Conny
        '(The fact) that Aicke isn't coming annoys Conny.'
      f. Ein Buch lesen, das sie nicht fesselt, würde sie nie.
        a book read that her not mesmerizes would she never
        'She would never read a book that does not mesmerize her.'

## **Further reading**

Reis (1980) gives reasons for why field theory is important for the description of the position of constituents in German.

Höhle (1986) discusses fields to the left of the prefield, which are needed for left-dislocation structures such as with *der Mittwoch* in (120), *aber* in (121a) and *denn* in (121b):

(120) Der Mittwoch, der passt mir gut.
        the Wednesday that fits me good
        'Wednesday, that suits me fine.'

(121) b. Denn dass es regnet, damit rechnet keiner.
        because that it rains there.with reckons nobody
        'Because no-one expects that it will rain.'

Höhle also discusses the historical development of field theory.

Osborne (2018b), working in the framework of Dependency Grammar, challenged the notion of constituents. Those interested in the discussion will find some comments in Müller (2019b: Section 2).

# **2 Phrase structure grammar**

This chapter deals with phrase structure grammars (PSGs), which play an important role in several of the theories we will encounter in later chapters.

# **2.1 Symbols and rewrite rules**

Words can be assigned to a particular part of speech on the basis of their inflectional properties and syntactic distribution. Thus, *weil* 'because' in (1) is a conjunction, whereas *das* 'the' and *dem* 'the' are articles and therefore classed as determiners. Furthermore, *Buch* 'book' and *Kind* 'child' are nouns and *gibt* 'gives' is a verb.

(1) weil er das Buch dem Kind gibt
        because he the book the child gives
        'because he gives the child the book'

Using the constituency tests we introduced in Section 1.3, one can show that individual words as well as the strings *das Buch* 'the book' and *dem Kind* 'the child' form constituents. These are then assigned certain symbols. Since nouns form an important part of the phrases *das Buch* and *dem Kind*, these are referred to as *noun phrases* or NPs, for short. The pronoun *er* 'he' can occur in the same positions as full NPs and can therefore also be assigned to the category NP.

Phrase structure grammars come with rules specifying which symbols are assigned to certain kinds of words and how these are combined to create more complex units. A simple phrase structure grammar which can be used to analyze (1) is given in (2):<sup>1,2</sup>

(2) NP → Det N
    NP → er
    Det → das
    Det → dem
    N → Buch
    N → Kind
    V → gibt
    S → NP NP NP V
We can therefore interpret a rule such as NP → Det N as meaning that a noun phrase, that is, something which is assigned the symbol NP, can consist of a determiner (Det) and a noun (N).

<sup>1</sup> I ignore the conjunction *weil* 'because' for now. Since the exact analysis of German verb-first and verb-second clauses requires a number of additional assumptions, we will restrict ourselves to verb-final clauses in this chapter.

<sup>2</sup> The rule NP → er may seem odd. We could assume the rule PersPron → er instead but then would have to posit a further rule which would specify that personal pronouns can replace full NPs: NP → PersPron. The rule in (2) combines the two aforementioned rules and states that *er* 'he' can occur in positions where noun phrases can.


We can analyze the sentence in (1) using the grammar in (2) in the following way: first, we take the first word in the sentence and check if there is a rule in which this word occurs on the right-hand side of the rule. If this is the case, then we replace the word with the symbol on the left-hand side of the rule. This happens in lines 2–4, 6–7 and 9 of the derivation in (3). For instance, in line 2 *er* is replaced by NP. If there are two or more symbols which occur together on the right-hand side of a rule, then all these words are replaced with the symbol on the left. This happens in lines 5, 8 and 10. For instance, in lines 5 and 8, Det and N are rewritten as NP.

(3)

| line | words and symbols | rule applied |
|---|---|---|
| 1 | er das Buch dem Kind gibt | |
| 2 | NP das Buch dem Kind gibt | NP → er |
| 3 | NP Det Buch dem Kind gibt | Det → das |
| 4 | NP Det N dem Kind gibt | N → Buch |
| 5 | NP NP dem Kind gibt | NP → Det N |
| 6 | NP NP Det Kind gibt | Det → dem |
| 7 | NP NP Det N gibt | N → Kind |
| 8 | NP NP NP gibt | NP → Det N |
| 9 | NP NP NP V | V → gibt |
| 10 | S | S → NP NP NP V |
In (3), we began with a string of words and it was shown that we can derive the structure of a sentence by applying the rules of a given phrase structure grammar. We could have applied the same steps in reverse order: starting with the sentence symbol S, we would have applied the steps 9–1 and arrived at the string of words. Selecting different rules from the grammar for rewriting symbols, we could use the grammar in (2) to get from S to the string *er dem Kind das Buch gibt* 'he the child the book gives'. We can say that this grammar licenses (or generates) a set of sentences.
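The rewriting procedure just described can be made explicit in a few lines of code. The following Python sketch (an illustration only, not a serious parser) encodes the grammar in (2) as pairs of a left-hand side and a right-hand side and repeatedly replaces the first matching right-hand side until only the sentence symbol S remains:

```python
# Grammar (2) as (left-hand side, right-hand side) pairs.
RULES = [
    ("S", ["NP", "NP", "NP", "V"]),
    ("NP", ["Det", "N"]),
    ("NP", ["er"]),
    ("Det", ["das"]), ("Det", ["dem"]),
    ("N", ["Buch"]), ("N", ["Kind"]),
    ("V", ["gibt"]),
]

def reduce_once(symbols):
    """Replace the first occurrence of some rule's right-hand side."""
    for lhs, rhs in RULES:
        n = len(rhs)
        for i in range(len(symbols) - n + 1):
            if symbols[i:i + n] == rhs:
                return symbols[:i] + [lhs] + symbols[i + n:]
    return None  # no rule applicable

symbols = "er das Buch dem Kind gibt".split()
while symbols and symbols != ["S"]:
    symbols = reduce_once(symbols)
    print(symbols)
```

The order in which the reductions are carried out may differ from the one shown in (3), but the final result, the symbol S, is the same.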

The derivation in (3) can also be represented as a tree. This is shown in Figure 2.1.

Figure 2.1: Analysis of *er das Buch dem Kind gibt* 'he the book the child gives'

The symbols in the tree are called *nodes*. We say that S immediately dominates the NP nodes and the V node. The other nodes in the tree are also dominated, but not immediately dominated, by S. If we want to talk about the relationship between nodes, it is common

to use kinship terms. In Figure 2.1, S is the *mother node* of the three NP nodes and the V node. The NP nodes and V are *sisters* or *daughters*, since they have the same mother node.<sup>3</sup> If a node has two daughters, then we have a binary branching structure. If there is exactly one daughter, then we have a unary branching structure. Two constituents are said to be *adjacent* if they are directly next to each other.

Phrase structure rules are often omitted in linguistic publications. Instead, authors opt for tree diagrams or the compact equivalent bracket notation such as (4).

(4) [S [NP er] [NP [Det das] [N Buch]] [NP [Det dem] [N Kind]] [V gibt]]

Nevertheless, it is the grammatical rules which are actually important since these represent grammatical knowledge which is independent of specific structures. In this way, we can use the grammar in (2) to parse or generate the sentence in (5), which differs from (1) in the order of objects:

(5) [weil] er dem Kind das Buch gibt
        because he.nom the.dat child the.acc book gives
        'because he gives the child the book'

The rules for replacing determiners and nouns are simply applied in a different order than in (1). Rather than replacing the first Det with *das* 'the' and the first noun with *Buch* 'book', the first Det is replaced with *dem* 'the' and the first noun with *Kind*.

At this juncture, I should point out that the grammar in (2) is not the only possible grammar for the example sentence in (1). There is an infinite number of possible grammars which could be used to analyze these kinds of sentences (see exercise 1). Another possible grammar is given in (6):

(6) NP → Det N
    NP → er
    Det → das
    Det → dem
    N → Buch
    N → Kind
    V → NP V
    V → gibt
    S → NP V
This grammar licenses binary branching structures as shown in Figure 2.2 on the following page.

Both the grammar in (6) and the one in (2) are too imprecise. If we adopt additional lexical entries for *ich* 'I' and *den* 'the' (accusative) in our grammar, then we would incorrectly license the ungrammatical sentences in (7b–d):<sup>4</sup>


<sup>3</sup> *Parent node* and *child node* are alternative terms. I use *mother* and *daughter* here, since this terminology is also used in formalizations of some of the theories discussed later.

<sup>4</sup>With the grammar in (6), we also have the additional problem that we cannot determine when an utterance is complete since the symbol V is used for all combinations of V and NP. Therefore, we can also analyze the sentence in (i) with this grammar:

(i) a. * der Delphin erwartet
        the.nom dolphin expects
    b. * des Kindes der Delphin den Ball dem Kind gibt
        the.gen child.gen the.nom dolphin the.acc ball the.dat child gives

The number of arguments required by a verb must be somehow represented in the grammar. In the following chapters, we will see exactly how the selection of arguments by a verb (valence) can be captured in various grammatical theories.


Figure 2.2: Analysis of *er das Buch dem Kind gibt* with a binary branching structure

(7) b. * ich das Buch dem Kind gibt
        I.nom the.acc book the.dat child gives
    c. * er das Buch den Kind gibt
        he.nom the.acc book the.acc child gives
    d. * er den Buch dem Kind gibt
        he.nom the.m book(n) the child gives

In (7b), subject-verb agreement has been violated, in other words: *ich* 'I' and *gibt* 'gives' do not fit together. (7c) is ungrammatical because the case requirements of the verb have not been satisfied: *gibt* 'gives' requires a dative object. Finally, (7d) is ungrammatical because there is a lack of agreement between the determiner and the noun. It is not possible to combine *den* 'the', which is masculine and bears accusative case, and *Buch* 'book' because *Buch* is neuter gender. For this reason, the gender properties of these two elements are not the same and the elements can therefore not be combined.

In the following, we will consider how we would have to change our grammar to stop it from licensing the sentences in (7b–d). If we want to capture subject-verb agreement, then we have to cover the following six cases in German, as the verb has to agree with the subject in both person (1, 2, 3) and number (sg, pl):

(8) a. Ich schlafe. (1, sg)
        I sleep
    b. Du schläfst. (2, sg)
        you sleep
    c. Er schläft. (3, sg)
        he sleeps
    d. Wir schlafen. (1, pl)
        we sleep
    e. Ihr schlaft. (2, pl)
        you sleep
    f. Sie schlafen. (3, pl)
        they sleep
It is possible to capture these relations with grammatical rules by increasing the number of symbols we use. Instead of the rule S → NP NP NP V, we can use the following:

(9) S → NP\_1\_sg NP NP V\_1\_sg
    S → NP\_2\_sg NP NP V\_2\_sg
    S → NP\_3\_sg NP NP V\_3\_sg
    S → NP\_1\_pl NP NP V\_1\_pl
    S → NP\_2\_pl NP NP V\_2\_pl
    S → NP\_3\_pl NP NP V\_3\_pl

This would mean that we need six different symbols for noun phrases and verbs respectively, as well as six rules rather than one.

In order to account for case assignment by the verb, we can incorporate case information into the symbols in an analogous way. We would then get rules such as the following:

(10) S → NP\_1\_sg\_nom NP\_dat NP\_acc V\_1\_sg\_nom\_dat\_acc
     S → NP\_2\_sg\_nom NP\_dat NP\_acc V\_2\_sg\_nom\_dat\_acc
     S → NP\_3\_sg\_nom NP\_dat NP\_acc V\_3\_sg\_nom\_dat\_acc
     S → NP\_1\_pl\_nom NP\_dat NP\_acc V\_1\_pl\_nom\_dat\_acc
     S → NP\_2\_pl\_nom NP\_dat NP\_acc V\_2\_pl\_nom\_dat\_acc
     S → NP\_3\_pl\_nom NP\_dat NP\_acc V\_3\_pl\_nom\_dat\_acc

Since it is necessary to differentiate between noun phrases in four cases, we have a total of six symbols for NPs in the nominative and three symbols for NPs with other cases. Since verbs have to match the NPs, that is, we have to differentiate between verbs which select three arguments and those selecting only one or two (11), we have to increase the number of symbols we assume for verbs.

(11) b. * Aicke schläft das Buch.
        Aicke sleeps the book

In the rules above, the information about the number of arguments required by a verb is included in the marking 'nom\_dat\_acc'.

In order to capture the determiner-noun agreement in (12), we have to incorporate information about gender (fem, mas, neu), number (sg, pl), case (nom, gen, dat, acc) and the inflectional classes (strong, weak).<sup>5</sup>


Instead of the rule NP → Det N, we will have to use rules such as those in (13):<sup>6</sup>

(13) NP\_3\_sg\_nom → Det\_fem\_sg\_nom N\_fem\_sg\_nom
     NP\_3\_sg\_nom → Det\_mas\_sg\_nom N\_mas\_sg\_nom
     NP\_3\_sg\_nom → Det\_neu\_sg\_nom N\_neu\_sg\_nom
     NP\_3\_pl\_nom → Det\_fem\_pl\_nom N\_fem\_pl\_nom
     NP\_3\_pl\_nom → Det\_mas\_pl\_nom N\_mas\_pl\_nom
     NP\_3\_pl\_nom → Det\_neu\_pl\_nom N\_neu\_pl\_nom

(13) shows the rules for nominative noun phrases. We would need analogous rules for genitive, dative, and accusative. We would then require 24 symbols for determiners (3 · 2 · 4), 24 symbols for nouns and 24 rules rather than one. If inflection class is taken into account, the number of symbols and the number of rules doubles.

<sup>5</sup> These are inflectional classes for adjectives which are also relevant for some nouns such as *Beamter* 'civil servant', *Verwandter* 'relative', *Gesandter* 'envoy'. For more on adjective classes see page 21.

<sup>6</sup> To keep things simple, these rules do not incorporate information regarding the inflection class.

# **2.2 Expanding PSG with features**

Phrase structure grammars which only use atomic symbols are problematic as they cannot capture certain generalizations. We as linguists can recognize that NP\_3\_sg\_nom stands for a noun phrase because it contains the letters NP. However, in formal terms this symbol is just like any other symbol in the grammar and we cannot capture the commonalities of all the symbols used for NPs. Furthermore, unstructured symbols do not capture the fact that the rules in (13) all have something in common. In formal terms, the only thing that the rules have in common is that there is one symbol on the left-hand side of the rule and two on the right.

We can solve this problem by introducing features which are assigned to category symbols and therefore allow for the values of such features to be included in our rules. For example, we can assume the features person, number and case for the category symbol NP. For determiners and nouns, we would adopt an additional feature for gender and one for inflectional class. (14) shows two rules augmented by the respective values in brackets:<sup>7</sup>

(14) NP(3,sg,nom) → Det(fem,sg,nom) N(fem,sg,nom)
     NP(3,sg,nom) → Det(mas,sg,nom) N(mas,sg,nom)

If we were to use variables rather than the values in (14), we would get rule schemata as the one in (15):

(15) NP(3,Num,Case) → Det(Gen,Num,Case) N(Gen,Num,Case)

The values of the variables here are not important. What is important is that they match. For this to work, it is important that the values are ordered; that is, in the category of a determiner, the gender is always first, number second and so on. The value of the person feature (the first position in the NP(3,Num,Case)) is fixed at '3' by the rule. These kinds of restrictions on the values can, of course, be determined in the lexicon:

(16) NP(3,sg,nom) → es
     Det(mas,sg,nom) → des

The rules in (10) can be collapsed into a single schema as in (17):

(17) S → NP(Per1,Num1,nom) NP(Per2,Num2,dat) NP(Per3,Num3,acc) V(Per1,Num1,ditransitive)

The identification of Per1 and Num1 on the verb and on the subject ensures that there is subject-verb agreement. For the other NPs, the values of these features are irrelevant. The case of these NPs is explicitly determined.
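The matching of variable values that the schema in (17) relies on can be illustrated with a small piece of code. In the following Python sketch (my own illustration of the idea, not a fragment of any grammar formalism; slots beginning with an uppercase letter are treated as variables in the sense of (15)), a pattern matches a sequence of categories only if every occurrence of the same variable receives the same value:

```python
def matches(schema, cats):
    """Check a list of category tuples against patterns of equal shape.
    A slot starting with an uppercase letter is a variable; fixed
    values such as '3', 'sg' or 'nom' must match exactly."""
    bindings = {}
    for pattern, cat in zip(schema, cats):
        for slot, value in zip(pattern, cat):
            if slot[0].isupper():  # variable slot
                if bindings.setdefault(slot, value) != value:
                    return False   # same variable bound to two values
            elif slot != value:    # fixed value
                return False
    return True

# The daughters of (15): Det(Gen,Num,Case) N(Gen,Num,Case)
schema = [("Gen", "Num", "Case"), ("Gen", "Num", "Case")]
print(matches(schema, [("fem", "sg", "nom"), ("fem", "sg", "nom")]))  # True
print(matches(schema, [("mas", "sg", "nom"), ("fem", "sg", "nom")]))  # False
```

The second call fails because the variable Gen would have to be *mas* and *fem* at the same time, which is exactly what rules out determiner-noun combinations like (7d).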

<sup>7</sup>Chapter 6 introduces attribute value structures. In these structures we always have pairs of a feature name and a feature value. In such a setting, the order of values is not important, since every value is uniquely identified by the corresponding feature name. Since we do not have feature names in schemata like (15), the order of the values is important.

# **2.3 Semantics**

In the introductory chapter and the previous sections, we have been dealing with syntactic aspects of language and the focus will remain very much on syntax for the remainder of this book. It is, however, important to remember that we use language to communicate, that is, to transfer information about certain situations, topics or opinions. If we want to accurately explain our capacity for language, then we also have to explain the meanings that our utterances have. To this end, it is necessary to understand their syntactic structure, but this alone is not enough. Furthermore, theories of language acquisition that only concern themselves with the acquisition of syntactic constructions are also inadequate. The syntax-semantics interface is therefore important and every grammatical theory has to say something about how syntax and semantics interact. In the following, I will show how we can combine phrase structure rules with semantic information. To represent meanings, I will use first-order predicate logic and λ-calculus. Unfortunately, it is not possible to provide a discussion of the basics of logic detailed enough for readers without prior knowledge to follow all the details, but the simple examples discussed here should be enough to provide some initial insights into how syntax and semantics interact and, furthermore, how we can develop a linguistic theory to account for this.

To show how the meaning of a sentence is derived from the meaning of its parts, we will consider (18a). We assign the meaning in (18b) to the sentence in (18a).

(18) a. Max schläft.
        Max sleeps
        'Max is sleeping.'
     b. *schlafen*′(*max*′)

Here, we are assuming *schlafen*′ to be the meaning of *schläft* 'sleeps'. We use prime symbols to indicate that we are dealing with word meanings and not actual words. At first glance, it may not seem that we have really gained anything by using *schlafen*′ to represent the meaning of (18a), since it is just another form of the verb *schläft* 'sleeps'. It is, however, important to concentrate on a single verb form as inflection is irrelevant when it comes to meaning. We can see this by comparing the examples in (19a) and (19b):

(19) b. Alle Jungen schlafen.
        all boys sleep
        'All boys sleep.'

To enhance readability I use English translations of the predicates in semantic representations from now on.<sup>8</sup> So the meaning of (18a) is represented as (20) rather than (18b):

<sup>8</sup>Note that I do not claim that English is suited as a representation language for semantic relations and concepts that can be expressed in other languages.

(20) *sleep*′(*max*′)

When looking at the meaning in (20), we can consider which part of the meaning comes from each word. It seems relatively intuitive that *max*′ comes from *Max*, but the trickier question is what exactly *schläft* 'sleeps' contributes in terms of meaning. If we think about what characterizes a 'sleeping' event, we know that there is typically an individual who is sleeping. This information is part of the meaning of the verb *schlafen* 'to sleep'. The verb meaning does not contain information about the sleeping individual, however, as this verb can be used with various subjects:

(21) a. Paul schläft.
        Paul sleeps
        'Paul is sleeping.'
     b. Mio schläft.
        Mio sleeps


We can therefore abstract away from any specific use of *sleep*′ and instead of, for example, *max*′ in (20), we use a variable (e.g., x). This can then be replaced by *paul*′, *mio*′ or *xaver*′ in a given sentence. To allow us to access these variables in a given meaning, we can write them with a λ in front. Accordingly, *schläft* 'sleeps' will have the following meaning:

(22) λx *sleep*′(x)

The step from (20) to (22) is referred to as *lambda abstraction*. The combination of the expression in (22) with the meaning of its arguments happens in the following way: we remove the λ and the corresponding variable and then replace all instances of the variable with the meaning of the argument. If we combine (22) and *max*′ as in (23), we arrive at the meaning in (20), namely *sleep*′(*max*′).

(23) λx *sleep*′(x) *max*′

The process is called β-reduction or β-conversion. To show this further, let us consider an example with a transitive verb. The sentence in (24a) has the meaning given in (24b):

(24) a. Max mag Lotte.
        Max likes Lotte
        'Max likes Lotte.'
     b. *like*′(*max*′, *lotte*′)

The λ-abstraction of *mag* 'likes' is shown in (25):

(25) λyλx *like*′(x, y)


Note that it is always the first λ that has to be used first. The variable y corresponds to the object of *mögen* 'to like'. For languages like English it is assumed that the object forms a verb phrase (VP) together with the verb and this VP is combined with the subject. German differs from English in allowing more freedom in constituent order. The problems that result for form-meaning mappings are solved in different ways by different theories. The respective solutions will be addressed in the following chapters.

If we combine the representation in (25) with that of the object *Lotte*, we arrive at (26a), and following β-reduction, (26b):

(26) a. λyλx *like*′(x, y) *lotte*′
     b. λx *like*′(x, *lotte*′)

This meaning can in turn be combined with the subject and we then get (27a) and (27b) after β-reduction:

(27) a. λx *like*′(x, *lotte*′) *max*′
     b. *like*′(*max*′, *lotte*′)
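The workings of β-reduction can also be made concrete with Python's own anonymous functions, which behave like λ-terms with respect to application. The following sketch (an illustration only; the tuples merely stand in for the predicate-logic formulas) mirrors the steps in (23) and (26)–(27):

```python
# λx sleep'(x): a function from an individual to a formula.
sleep = lambda x: ("sleep", x)

# λyλx like'(x, y): the object (y) is consumed first, then the subject (x).
like = lambda y: lambda x: ("like", x, y)

print(sleep("max"))          # ('sleep', 'max')          = (20)
print(like("lotte"))         # a function still lacking its subject, cf. (26b)
print(like("lotte")("max"))  # ('like', 'max', 'lotte')  = (27b)
```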

After introducing lambda calculus, integrating the composition of meaning into our phrase structure rules is simple. A rule for the combination of a verb with its subject has to be expanded to include positions for the semantic contribution of the verb, the semantic contribution of the subject and then the meaning of the combination of these two (the entire sentence). The complete meaning is the combination of the individual meanings in the correct order. We can therefore take the simple rule in (28a) and turn it into (28b):

(28) a. S → NP(nom) V
     b. S(V′ NP′) → NP(nom, NP′) V(V′)

V′ stands for the meaning of V and NP′ for the meaning of the NP(nom). V′ NP′ stands for the combination of V′ and NP′. When analyzing (18a), the meaning of V′ is λx *sleep*′(x) and the meaning of NP′ is *max*′. The combination V′ NP′ corresponds to (29a) or, after β-reduction, to (18b) – repeated here as (29b):

(29) a. λx *sleep*′(x) *max*′
     b. *sleep*′(*max*′)

For the example with a transitive verb in (24a), the rule in (30) can be proposed:

(30) S(V′ NP2′ NP1′) → NP(nom, NP1′) V(V′) NP(acc, NP2′)

The meaning of the verb (V′ ) is first combined with the meaning of the object (NP2′ ) and then with the meaning of the subject (NP1′ ).

At this point, we can see that there are several distinct semantic rules for the phrase structure rules above. The hypothesis that we should analyze language in this way is called the *rule-to-rule hypothesis* (Bach 1976: 184). A more general process for deriving the meaning of linguistic expressions will be presented in Section 5.1.4.

# **2.4 Phrase structure rules for some aspects of German syntax**

Whereas determining the direct constituents of a sentence is relatively easy, since we can very much rely on the movement test due to the somewhat flexible order of constituents in German, it is more difficult to identify the parts of the noun phrase. This is the problem we will focus on in this section. To help motivate the assumptions about X̄ syntax to be discussed in Section 2.5, we will also discuss prepositional phrases.

## **2.4.1 Noun phrases**

Up to now, we have assumed a relatively simple structure for noun phrases: our rules state that a noun phrase consists of a determiner and a noun. Noun phrases can have a distinctly more complex structure than (31a). This is shown by the following examples in (31):

(31) a. ein Buch
        a book
     b. ein Buch, das wir kennen
        a book that we know
     c. ein Buch aus Japan
        a book from Japan
     d. ein interessantes Buch
        an interesting book
     e. ein Buch aus Japan, das wir kennen
        a book from Japan that we know
     f. ein interessantes Buch aus Japan
        an interesting book from Japan
     g. ein interessantes Buch, das wir kennen
        an interesting book that we know
     h. ein interessantes Buch aus Japan, das wir kennen
        an interesting book from Japan that we know

As well as determiners and nouns, noun phrases can also contain adjectives, prepositional phrases and relative clauses. The additional elements in (31) are adjuncts. They restrict the set of objects which the noun phrase refers to. Whereas (31a) refers to a being which has the property of being a book, the referent of (31b) must also have the property of being known to us.

Our previous rule for noun phrases simply combines a noun and a determiner and can therefore only be used to analyze (31a). The question we are facing now is how we can modify this rule or which additional rules we would have to assume in order to analyze the other noun phrases in (31). In addition to rule (32a), one could propose a rule such as the one in (32b).<sup>9,10</sup>

(32) a. NP → Det N
     b. NP → Det A N

However, this rule would still not allow us to analyze noun phrases such as (33):

(33) alle weiteren schlagkräftigen Argumente
        all further strong arguments
        'all other strong arguments'

In order to be able to analyze (33), we require a rule such as (34):

(34) NP → Det A A N

It is always possible to increase the number of adjectives in a noun phrase and setting an upper limit for adjectives would be entirely arbitrary. Even if we opt for the following abbreviation, there are still problems:

(35) NP → Det A\* N

The asterisk in (35) stands for any number of iterations. Therefore, (35) encompasses rules with no adjectives as well as those with one, two or more.

The problem is that according to the rule in (35) adjectives and nouns do not form a constituent and we can therefore not explain why coordination is still possible in (36):

(36) alle [[großen Seeelefanten] und [grauen Eichhörnchen]]
        all big elephant.seals and grey squirrels
        'all big elephant seals and grey squirrels'

If we assume that coordination involves the combination of two or more word strings with the same syntactic properties, then we would have to assume that the adjective and noun form a unit.

The following rules capture the noun phrases with adjectives discussed thus far:

(37) a. NP → Det N̄
     b. N̄ → A N̄
     c. N̄ → N

These rules state the following: a noun phrase consists of a determiner and a nominal element (N̄). This nominal element can consist of an adjective and a nominal element (37b), or just a noun (37c). Since N̄ is also on the right-hand side of the rule in (37b), we can apply this rule multiple times and therefore account for noun phrases with multiple adjectives such as (33). Figure 2.3 on the next page shows the structure of a noun phrase without an adjective and that of a noun phrase with one or two adjectives. The adjective

<sup>9</sup> See Eisenberg (2004: 238) for the assumption of flat structures in noun phrases.

<sup>10</sup>There are, of course, other features such as gender and number, which should be part of all the rules discussed in this section. I have omitted these in the following for ease of exposition.

Figure 2.3: Noun phrases with differing numbers of adjectives

*grau* 'grey' restricts the set of referents for the noun phrase. If we assume an additional adjective such as *groß* 'big', then the noun phrase only refers to those squirrels that are grey as well as big. These kinds of noun phrases can be used in contexts such as the following:

(38) A: Alle grauen Eichhörnchen sind groß.
        all grey squirrels are big
        'All grey squirrels are big.'
     B: Nein, ich habe ein kleines graues Eichhörnchen gesehen.
        no I have a small grey squirrel seen
        'No, I saw a small grey squirrel.'

We observe that this discourse can be continued with *Aber alle kleinen grauen Eichhörnchen sind krank* 'but all small grey squirrels are ill' and a corresponding answer. The possibility to have even more adjectives in noun phrases such as *ein kleines graues Eichhörnchen* 'a small grey squirrel' is accounted for in our rule system in (37). In the rule (37b), N̄ occurs on the left-hand as well as the right-hand side of the rule. This kind of rule is referred to as *recursive*.
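Recursion is what lets a finite rule set license unboundedly many noun phrases. The following toy Python sketch (an illustration, not a claim about processing; the vocabulary is taken from the squirrel examples above) generates noun phrases with the rules in (37), writing N̄ as the function `nbar`:

```python
import random

def nbar():
    if random.random() < 0.5:                       # (37b)  N̄ → A N̄
        return [random.choice(["kleines", "graues"])] + nbar()
    return ["Eichhörnchen"]                         # (37c)  N̄ → N

def np():
    return ["ein"] + nbar()                         # (37a)  NP → Det N̄

print(" ".join(np()))   # e.g., 'ein kleines graues Eichhörnchen'
```

Because `nbar` may call itself before bottoming out in a noun, there is no upper bound on the number of adjectives, just as the rule system in (37) predicts.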

We have now developed a nifty little grammar that can be used to analyze noun phrases containing adjectival modifiers. As a result, the combination of an adjective and noun is given constituent status. One may wonder at this point if it would not make sense to also assume that determiners and adjectives form a constituent, as we also have the following kinds of noun phrases:

(39) diese schlauen und diese neugierigen Eichhörnchen
        these smart and these curious squirrels

Here, we are dealing with a different structure, however. Two full NPs have been conjoined and part of the first conjunct has been deleted.


(40) diese schlauen Eichhörnchen und diese neugierigen Eichhörnchen
        these smart squirrels and these curious squirrels

One can find similar phenomena at the sentence and even word level:

(41) a. dass Peter dem Kind das Buch gibt und Maria der Frau die Schallplatte gibt
        that Peter the child the book gives and Maria the woman the record gives
        'that Peter gives the book to the child and Maria the record to the woman'
     b. be- und ent-laden
        prfx and prfx-load
        'to load and unload'

Thus far, we have discussed how we can ideally integrate adjectives into our rules for the structure of noun phrases. Other adjuncts such as prepositional phrases or relative clauses can be combined with N̄ in an analogous way to adjectives:

(42) a. N̄ → N̄ PP
     b. N̄ → N̄ relative clause

With these rules and those in (37), it is possible – assuming the corresponding rules for PPs and relative clauses – to analyze all the examples in (31).

(37c) states that it is possible for N̄ to consist of a single noun. A further important rule has not yet been discussed: we need another rule to combine nouns such as *Vater* 'father', *Sohn* 'son' or *Bild* 'picture', so-called *relational nouns*, with their arguments. Examples of these can be found in (43a–b). (43c) is an example of a nominalization of a verb with its argument:

(43) b. das Bild vom Gleimtunnel
        the picture of.the Gleimtunnel
        'the picture of the Gleimtunnel'
     c. das Kommen der Installateurin
        the coming of.the plumber
        'the plumber's visit'

The rule that we need to analyze (43a,b) is given in (44):

(44) N̄ → N PP

Figure 2.4 shows two structures with PP-arguments.<sup>11</sup> The tree on the right also contains an additional PP-adjunct, which is licensed by the rule in (42a).

<sup>11</sup>The triangles are abbreviations for fully specified structures. They are used for substructures irrelevant for the current topic.

Figure 2.4: Combination of a noun with the PP complement *vom Gleimtunnel*; the tree on the right additionally contains an adjunct PP

In addition to the previously discussed NP structures, there are other structures where the determiner or noun is missing. Nouns can be omitted via ellipsis. (45) gives examples of noun phrases where a noun that does not require a complement has been omitted. The examples in (46) show NPs in which only a determiner and the complement of the noun have been realized, but not the noun itself. The underscore marks the position where the noun would normally occur.

(45) b. ein neues interessantes _
        a new interesting
        'a new interesting one'
     c. ein interessantes _ aus Japan
        an interesting from Japan
        'an interesting one from Japan'
     d. ein interessantes _, das wir kennen
        an interesting that we know
        'an interesting one that we know'

(46) b. (Nein, nicht das Bild von der Stadtautobahn), das _ vom Gleimtunnel war beeindruckend.
        no not the picture of the motorway the of.the Gleimtunnel was impressive
        'No, it wasn't the picture of the motorway, but rather the one of the Gleimtunnel that was impressive.'
     c. (Nein, nicht das Kommen des Tischlers), das _ der Installateurin ist wichtig.
        no not the coming of.the carpenter the of.the plumber is important
        'No, it isn't the visit of the carpenter, but rather the visit of the plumber that is important.'

In English, the pronoun *one* must often be used in the corresponding position,<sup>12</sup> but in German the noun is simply omitted. In phrase structure grammars, this can be described by a so-called *epsilon production*. These rules replace a symbol with nothing (47a). The rule in (47b) is an equivalent variant which is responsible for the term *epsilon production*:

(47) a. N →
     b. N → ε

The corresponding trees are shown in Figure 2.5. Going back to boxes, the rules in (47)

Figure 2.5: Noun phrases without an overt head

correspond to empty boxes with the same labels as the boxes of ordinary nouns. As we have considered previously, the actual content of the boxes is unimportant when considering the question of where we can incorporate them. For example, the noun phrases in (31) can occur in the same sentences. Similarly, the empty noun box behaves like one with a genuine noun: if we do not open the empty box, we will not be able to notice the difference from a filled box.

<sup>12</sup>See Fillmore et al. (2012: Section 4.12) for English examples without the pronoun *one*.

It is not only possible to omit the noun from noun phrases: the determiner can also remain unrealized in certain contexts. (48) shows noun phrases in the plural:

(48) a. Bücher
        books
     b. Bücher, die wir kennen
        books that we know
     c. interessante Bücher
        interesting books
     d. interessante Bücher, die wir kennen
        interesting books that we know

The determiner can also be omitted in the singular if the noun denotes a mass noun:

(49) a. Getreide
        grain


Finally, both the determiner and the noun can be omitted:

(50) b. Dort drüben steht frisches, das gerade gemahlen wurde.
        there over stands fresh that just ground aux
        'Over there is some fresh (grain) that has just been ground.'

Figure 2.6 on the next page shows the corresponding trees.

It is necessary to add two further comments to the rules that were developed up to this point: up to now, I have always spoken of adjectives. However, it is possible to have very complex adjective phrases in pre-nominal position. These can be adjectives with complements (51a,b) or adjectival participles (51c,d):

(51) a. der seiner Frau treue Mann
        the his.dat wife faithful man
        'the man faithful to his wife'

Figure 2.6: Noun phrases without overt determiner


Taking this into account, the rule (37b) has to be modified in the following way:

(52) N̄ → AP N̄

An adjective phrase (AP) can consist of an NP and an adjective, a PP and an adjective or just an adjective:

(53) a. AP → NP A
     b. AP → PP A
     c. AP → A

There are two imperfections resulting from the rules that were developed thus far. These are the rules for adjectives or nouns without complements in (53c) as well as (37c) – repeated here as (54):

(54) N̄ → N

If we apply these rules, then we will generate unary branching subtrees, that is, trees with a mother that only has one daughter. See Figure 2.6 for an example of this. If we maintain the parallel to the boxes, this would mean that there is a box which contains another box which is the one with the relevant content.

In principle, nothing stops us from placing this information directly into the larger box. Instead of the rules in (55), we will simply use the rules in (56):

(55) a. A → kluge
     b. N → Frauen

(56) a. AP → kluge
     b. N̄ → Frauen


(56a) states that *kluge* 'smart' has the same properties as a full adjective phrase, in particular that it cannot be combined with a complement. This is parallel to the categorization of the pronoun *er* 'he' as an NP in the grammars (2) and (6).

Assigning N̄ to nouns which do not require a complement has the advantage that we do not have to explain why the analysis in (57b) is possible as well as (57a) despite there not being any difference in meaning.

(57) a. [NP einige [N̄ kluge [N̄ [N̄ [N Frauen]] und [N̄ [N Männer]]]]]
     b. [NP einige [N̄ kluge [N̄ [N [N Frauen] und [N Männer]]]]]


In (57a), two nouns have projected to N̄ and have then been joined by coordination. The result of coordination of two constituents of the same category is always a new constituent with that category. In the case of (57a), this is also N̄. This constituent is then combined with the adjective and the determiner. In (57b), the nouns themselves have been coordinated. The result of this is always another constituent which has the same category as its parts. In this case, this would be N. This N becomes N̄ and is then combined with the adjective. If nouns which do not require complements were categorized as N̄ rather than N, we would not have the problem of spurious ambiguities. The structure in (58) shows the only possible analysis.

(58) [NP einige [N̄ kluge [N̄ [N̄ Frauen] und [N̄ Männer]]]]
        some smart women and men

## **2.4.2 Prepositional phrases**

Compared to the syntax of noun phrases, the syntax of prepositional phrases (PPs) is relatively straightforward. PPs normally consist of a preposition and a noun phrase whose case is determined by that preposition. We can capture this with the following rule:

(59) PP → P NP

This rule must, of course, also contain information about the case of the NP. I have omitted this for ease of exposition as I did with the NP-rules and AP-rules above.


The Duden grammar (Eisenberg et al. 2005: §1300) offers examples such as those in (60), which show that certain prepositional phrases serve to further define the semantic contribution of the preposition by indicating some measurement, for example:

(60) b. [[Kurz] nach dem Start] fiel die Klimaanlage aus.
        shortly after the take.off fell the air.conditioning out
        'Shortly after take off, the air conditioning stopped working.'
     c. [[Schräg] hinter der Scheune] ist ein Weiher.
        diagonally behind the barn is a pond
        'There is a pond diagonally across from the barn.'
     d. [[Mitten] im Urwald] stießen die Forscher auf einen alten Tempel.
        middle in.the jungle stumbled the researchers on an old temple
        'In the middle of the jungle, the researchers came across an old temple.'

To analyze the sentences in (60a,b), one could propose the following rules in (61):

(61) a. PP → NP PP b. PP → AP PP

These rules combine a PP with an indication of measurement. The resulting constituent is another PP. It is possible to use these rules to analyze the prepositional phrases in (60a,b), but they unfortunately also allow us to analyze those in (62):

(62) a. \* [PP einen one Schritt step [PP kurz shortly [PP vor before dem the Abgrund abyss ]]]
	b. \* [PP kurz shortly [PP einen one Schritt step [PP vor before dem the Abgrund abyss ]]]

Both rules in (61) were used to analyze the examples in (62). Since the symbol PP occurs on both the left and right-hand side of the rules, we can apply the rules in any order and as many times as we like.

We can avoid this undesired side-effect by reformulating the previously assumed rules:

$$\begin{aligned} \text{(63)} \quad \text{a. } \text{PP} &\to \text{NP} \,\, \overline{\text{P}}\\ \text{b. } \text{PP} &\to \text{AP} \,\, \overline{\text{P}}\\ \text{c. } \text{PP} &\to \overline{\text{P}}\\ \text{d. } \,\, \overline{\text{P}} &\to \text{P NP} \end{aligned}$$

Rule (59) becomes (63d). The rule in (63c) states that a PP can consist of just P̄. Figure 2.7 on the facing page shows the analysis of (64) using (63c) and (63d) as well as the analysis of an example with an adjective in the first position following the rules in (63b) and (63d):

(64) vor before dem the Abgrund abyss 'in front of the abyss'

Figure 2.7: Prepositional phrases with and without measurement

At this point, the attentive reader is probably wondering why there is no empty measurement phrase in the left figure of Figure 2.7, which one might expect in analogy to the empty determiner in Figure 2.6. The reason for the empty determiner in Figure 2.6 is that the entire noun phrase without the determiner has a meaning similar to those with a determiner. The meaning normally contributed by the visible determiner has to somehow be incorporated in the structure of the noun phrase. If we did not place this meaning in the empty determiner, this would lead to more complicated assumptions about semantic combination: we only really require the mechanisms presented in Section 2.3 and these are very general in nature. The meaning is contributed by the words themselves and not by any rules. If we were to assume a unary branching rule such as that in the left tree in Figure 2.7 instead of the empty determiner, then this unary branching rule would have to provide the semantics of the determiner. This kind of analysis has also been proposed by some researchers. See Chapter 19 for more on empty elements.

Unlike determiner-less NPs, prepositional phrases without an indication of degree or measurement do not lack any meaning component for composition. It is therefore not necessary to assume an empty indication of measurement, which somehow contributes to the meaning of the entire PP. Hence, the rule in (63c) states that a prepositional phrase consists of P̄, that is, a combination of P and NP.
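The difference between the rules in (61) and those in (63) can also be made tangible with a toy implementation. In the following sketch (mine, not the book's fragment; multi-word phrases are crudely treated as single hyphenated tokens), the first grammar licenses the ungrammatical stackings in (62), while the P̄-based grammar does not:

```python
import nltk

# Rules (61) plus (59): PP appears on both sides of a rule, so measure
# phrases can be stacked without limit.
overgenerating = nltk.CFG.fromstring("""
PP -> NP PP | AP PP | P NP
NP -> 'einen-Schritt' | 'dem-Abgrund'
AP -> 'kurz'
P -> 'vor'
""")

# Rules (63): the measure phrase attaches to PBAR (P-bar) exactly once.
restricted = nltk.CFG.fromstring("""
PP -> NP PBAR | AP PBAR | PBAR
PBAR -> P NP
NP -> 'einen-Schritt' | 'dem-Abgrund'
AP -> 'kurz'
P -> 'vor'
""")

examples = [
    'kurz vor dem-Abgrund'.split(),                # (60b)-like, fine
    'einen-Schritt kurz vor dem-Abgrund'.split(),  # (62a), should be out
]
for tokens in examples:
    n_over = len(list(nltk.ChartParser(overgenerating).parse(tokens)))
    n_rest = len(list(nltk.ChartParser(restricted).parse(tokens)))
    print(' '.join(tokens), '->', n_over, 'vs', n_rest)
# The second string parses only in the overgenerating grammar.
```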

# **2.5 X theory**

If we look again at the rules formulated in the previous section, we see that heads are always combined with their complements to form a new constituent (65a,b), which can then be combined with further constituents (65c,d):

$$\begin{aligned} \text{(65)} \quad \text{a. } \overline{\text{N}} &\to \text{N PP}\\ \text{b. } \overline{\text{P}} &\to \text{P NP}\\ \text{c. } \text{NP} &\to \text{Det}\ \overline{\text{N}}\\ \text{d. } \text{PP} &\to \text{NP}\ \overline{\text{P}} \end{aligned}$$



Grammarians working on English noticed that parallel structures can be used for phrases which have adjectives or verbs as their head. I discuss adjective phrases at this point and postpone the discussion of verb phrases to Chapter 3. As in German, certain adjectives in English can take complements, with the important restriction that, unlike in German, adjectives cannot realize their complements pre-nominally in English. (66) gives some examples of adjective phrases:

	- a. Kim and Sandy are proud.
	- b. Kim and Sandy are very proud.
	- c. Kim and Sandy are proud of their child.
	- d. Kim and Sandy are very proud of their child.

Unlike the complements of prepositions, complements of adjectives are normally optional: *proud* can be used with or without a PP. The degree expression *very* is also optional.

The rules which we need for this analysis are given in (67), with the corresponding structures in Figure 2.8.

$$\begin{aligned} \text{(67)} \quad & \text{a. } \text{AP} \to \overline{\text{A}}\\ & \text{b. } \text{AP} \to \text{AdvP} \, \overline{\text{A}}\\ & \text{c. } \overline{\text{A}} \to \text{A PP} \\ & \text{d. } \overline{\text{A}} \to \text{A} \end{aligned}$$

Figure 2.8: English adjective phrases

As was shown in Section 2.2, it is possible to generalize over very specific phrase structure rules and thereby arrive at more general rules. In this way, properties such as person, number and gender are no longer encoded in the category symbols, but rather only simple symbols such as NP, Det and N are used. It is only necessary to specify something about the values of a feature if it is relevant in the context of a given rule. We can take this abstraction a step further: instead of using explicit category symbols such as N, V, P and A for lexical categories and NP, VP, PP and AP for phrasal categories, one can simply use a variable for the word class in question and speak of X and XP.

This form of abstraction can be found in so-called X theory (or X-bar theory; the term *bar* refers to the line above the symbol), which was developed by Chomsky (1970) and refined by Jackendoff (1977). Abstract rules of this form play an important role in many different theories, for example Government & Binding (Chapter 3), Generalized Phrase Structure Grammar (Chapter 5) and Lexical Functional Grammar (Chapter 7). In HPSG (Chapter 9), X theory also plays a role, but not all restrictions of the X schema have been adopted.

(68) shows a possible instantiation of X rules, where the category X has been used in place of N, as well as examples of word strings which can be derived by these rules:


Any word class can replace X (e.g., V, A or P). The X without a bar stands for a lexical item in the above rules. If one wants to make the bar level explicit, it is possible to write X<sup>0</sup>. Just as with the rule in (15), where we did not specify the case value of the determiner or the noun but rather simply required that the values on the right-hand side of the rule match, the rules in (68) require that the word class of an element on the right-hand side of the rule (X or X̄) matches that of the element on the left-hand side of the rule (X̄ or XP).

A lexical element can be combined with all its complements. The '\*' in the last rule stands for an unlimited number of repetitions of the symbol it follows. A special case is the zero-fold occurrence of complements: there is no PP complement of *Bild* 'picture' present in *das Bild* 'the picture', and thus N becomes N̄ directly. The result of the combination of a lexical element with its complements is a new projection level of X: projection level 1, which is marked by a bar. X̄ can then be combined with adjuncts. These can occur to the left or right of X̄. The result of this combination is still X̄, that is, the projection level is not changed by combining it with an adjunct. Maximal projections are marked by two bars. One can also write XP for a projection of X with two bars. An XP consists of a specifier and X̄. Depending on one's theoretical assumptions, subjects of sentences (Haider 1995, 1997a; Berman 2003a: Section 3.2.2) and determiners in NPs (Chomsky 1970: 210) are specifiers. Furthermore, degree modifiers (Chomsky 1970: 210) in adjective phrases and measurement indicators in prepositional phrases are also counted as specifiers.

Non-head positions can only host maximal projections and therefore complements, adjuncts and specifiers always have two bars. Figure 2.9 on the following page gives an overview of the minimal and maximal structure of phrases.
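Since X is literally a variable over word classes, the schema can be written as a rule template. The sketch below is only an illustration of this abstraction, not a reproduction of the rules in (68); the rule shapes follow the prose above (specifier plus X̄, adjuncts on either side of X̄, head plus complements), and the output consists of schematic strings, not an executable grammar:

```python
# The X schema as a rule template: instantiating the variable X yields
# the concrete rules for each word class.
def x_bar_rules(x: str) -> list[str]:
    xp, xbar = f"{x}P", f"{x}'"
    return [
        f"{xp} -> specifier {xbar}",   # XP consists of specifier and X'
        f"{xbar} -> {xbar} adjunct",   # adjuncts do not change the level
        f"{xbar} -> adjunct {xbar}",
        f"{xbar} -> {x} complement*",  # head with any number of complements
    ]

for category in ["N", "V", "A", "P"]:
    for rule in x_bar_rules(category):
        print(rule)
```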


Figure 2.9: Minimal and maximal structure of phrases

Some categories never have a specifier, while for others it is optional. Adjuncts are optional, and therefore not all structures have to contain an X̄ with an adjunct daughter. In addition to the branching shown in the right-hand figure, adjuncts to XP and head-adjuncts are sometimes possible. There is only a single rule in (68) for cases in which a head precedes its complements; however, an order in which the complement precedes the head is, of course, also possible. This is shown in Figure 2.9.

Figure 2.10 on the next page shows the analysis of the NP structures *das Bild* 'the picture' and *das schöne Bild von Paris* 'the beautiful picture of Paris'. The NP structures in Figure 2.10 and the tree for *proud* in Figure 2.8 show examples of minimally populated structures. The left tree in Figure 2.10 is also an example of a structure without an adjunct. The right-hand structure in Figure 2.10 is an example of a maximally populated structure: specifier, adjunct, and complement are all present.

The analysis given in Figure 2.10 assumes that all non-heads in a rule are phrases. One therefore has to assume that there is a determiner phrase even if the determiner is not combined with other elements. The unary branching of determiners is not elegant but it is consistent.<sup>13</sup> The unary branchings for the NP *Paris* in Figure 2.10 may also seem somewhat odd, but they actually become more plausible when one considers more complex noun phrases:

	- b. die the Maria Maria aus from Hamburg Hamburg 'Maria from Hamburg'

Unary projections are somewhat inelegant but this should not concern us too much here, as we have already seen in the discussion of the lexical entries in (56) that unary branching nodes can be avoided for the most part and that it is indeed desirable to avoid

<sup>13</sup>For an alternative version of X theory which does not assume elaborate structure for determiners see Muysken (1982).

Figure 2.10: X analysis of *das Bild* 'the picture' and *das schöne Bild von Paris* 'the beautiful picture of Paris'

such structures. Otherwise, one gets spurious ambiguities. In the following chapters, we will discuss approaches such as Categorial Grammar and HPSG, which do not assume unary rules for determiners, adjectives and nouns.

Furthermore, several theories discussed in this book do not share other assumptions of X theory; in particular, the assumption that non-heads always have to be maximal projections is given up. Pullum (1985) and Kornai & Pullum (1990) have shown that the respective theories are not necessarily less restrictive than theories which adopt a strict version of X theory. See also the discussion in Section 13.1.2.


	- b. Joshi Joshi freut is.happy sich. refl 'Joshi is happy.'

# **Exercises**


$$\begin{array}{c} \text{(72)} \quad \text{a. } \mathsf{NP} \to \mathsf{Det}\,\overline{\mathsf{N}}\\ \text{b. } \overline{\mathsf{N}} \to \mathsf{N} \end{array}$$

c. Det →

d. N →

	- (73) NP → Modifier\* books Modifier\*

The rule in (73) combines an unlimited number of modifiers with the noun *books* followed by an unlimited number of modifiers. We can use this rule to derive phrases such as those in (74):

	- a. books
	- b. interesting books
	- c. interesting books from Stuttgart

Make reference to coordination data in your answer. Assume that symmetric coordination requires that both coordinated phrases or words have the same syntactic category.

	- (75) a. Examine the plight of the very poor.
		- b. Their outfits range from the flamboyant to the functional.
		- c. The unimaginable happened.

(76) shows a phrase structure rule that corresponds to this construction:

(76) NP → the Adj

Adj stands for something that can be a single word like *poor* or complex like *very poor*.

Revisit the German data in (45) and (46) and explain why such an analysis and even a more general one as in (77) would not extend to German.

(77) NP → Det Adj

7. Why can X theory not account for German adjective phrases without additional assumptions? (This task is for (native) speakers of German only.)

	- (78) a. Der the.nom Mann man hilft helps dem the.dat Kind. child 'The man helps the child.'
		- b. Er he.nom gibt gives ihr her.dat das the Buch. book 'He gives her the book.'
		- c. Er he.nom wartet waits auf on ein a Wunder. miracle 'He is waiting for a miracle.'
	- (79) a. \* Der the.nom Mann man hilft helps er. he.nom
		- b. \* Er he.nom gibt gives ihr her.dat den the.m Buch. book.n
	- (80) a. Der the.nom Mann man hilft helps dem the.dat Kind child jetzt. now 'The man helps the child now.'
		- b. Der the.nom Mann man hilft helps dem the.dat Kind child neben next.to dem the Bushäuschen. bus.shelter 'The man helps the child next to the bus shelter.'
		- c. Er he.nom gibt gives ihr her.dat das the.acc Buch book jetzt. now 'He gives her the book now.'
		- d. Er he.nom gibt gives ihr her.dat das the.acc Buch book neben next.to dem the Bushäuschen. bus.shelter 'He gives her the book next to the bus shelter.'
		- e. Er he.nom wartet waits jetzt now auf on ein a Wunder. miracle 'He is waiting for a miracle now.'

## **Further reading**


The expansion of phrase structure grammars to include features was proposed as early as 1963 by Harman (1963).

The phrase structure grammar for noun phrases discussed in this chapter covers a large part of the syntax of noun phrases but cannot explain certain NP structures. Furthermore, it has a problem which exercise 3 is designed to show. A discussion of these phenomena and a solution in the framework of HPSG can be found in Netter (1998). For a discussion of the question whether Det or N is the head in nominal structures, see Müller (2022) and Machicao y Priemer & Müller (2021). Van Eynde (2021) is an overview of work on the NP in HPSG.

The discussion of the integration of semantic information into phrase structure grammars was very short. A detailed discussion of predicate logic and its integration into phrase structure grammars – as well as a discussion of quantifier scope – can be found in Blackburn & Bos (2005).

*a* https://swish.swi-prolog.org/, 2020-06-07.

*b* https://en.wikipedia.org/wiki/Definite\_clause\_grammar, 2020-06-07.

# **3 Transformational Grammar – Government & Binding**

Transformational Grammar and its subsequent incarnations (such as Government and Binding Theory and Minimalism) were developed by Noam Chomsky at MIT in Cambridge, Massachusetts (Chomsky 1957, 1965, 1975, 1981a, 1986a, 1995b). Manfred Bierwisch (1963) was the first to implement Chomsky's ideas for German. In the 1960s, the decisive impulse came from the *Arbeitsstelle Strukturelle Grammatik* 'Workgroup for Structural Grammar', which was part of the Academy of Sciences of the GDR. See Bierwisch 1992 and Vater 2010 for a historical overview. As well as Bierwisch's work, the following books focusing on German or the Chomskyan research program in general should also be mentioned: Fanselow (1987), Fanselow & Felix (1987), von Stechow & Sternefeld (1988), Grewendorf (1988), Haider (1993), Sternefeld (2006).

The different implementations of Chomskyan theories are often grouped under the heading *Generative Grammar*. This term comes from the fact that phrase structure grammars and the augmented frameworks that were suggested by Chomsky can generate sets of well-formed expressions (see p. 54). It is such a set of sentences that constitutes a language (in the formal sense), and one can test if a sentence forms part of a language by checking if that sentence is in the set of sentences generated by a given grammar. In this sense, simple phrase structure grammars and, with corresponding formal assumptions, GPSG, LFG, HPSG and Construction Grammar (CxG) are generative theories. In recent years, a different view of the formal basis of theories such as LFG, HPSG and CxG has emerged such that the aforementioned theories are now *model theoretic* theories rather than generative-enumerative ones<sup>1</sup> (see Chapter 14 for discussion). In 1965, Chomsky defined the term *Generative Grammar* in the following way (see also Chomsky 1995b: 162):

A grammar of a language purports to be a description of the ideal speaker-hearer's intrinsic competence. If the grammar is, furthermore, perfectly explicit – in other words, if it does not rely on the intelligence of the understanding reader but rather provides an explicit analysis of his contribution – we may call it (somewhat redundantly) a *generative grammar*. (Chomsky 1965: 4)

In this sense, all grammatical theories discussed in this book would be viewed as generative grammars. To differentiate further, sometimes the term *Mainstream Generative Grammar* (MGG) is used (Culicover & Jackendoff 2005: 3) for Chomskyan models. In this

<sup>1</sup>Model theoretic approaches are always constraint-based and the terms *model theoretic* and *constraint-based* are sometimes used synonymously.

chapter, I will discuss a well-developed and very influential version of Chomskyan grammar, GB theory. More recent developments following Chomsky's Minimalist Program are dealt with in Chapter 4.

# **3.1 General remarks on the representational format**

This section provides an overview of general assumptions. I introduce the concept of transformations in Section 3.1.1. Section 3.1.2 provides background information about assumptions regarding language acquisition, which shaped the theory considerably. Section 3.1.3 introduces the so-called T model, the basic architecture of GB theory. Section 3.1.4 introduces the X theory in the specific form used in GB and Section 3.1.5 shows how this version of the X theory can be applied to English. The discussion of the analysis of English sentences is an important prerequisite for understanding the analysis of German, since many analyses in the GB framework are modeled in parallel to the analyses of English. Section 3.1.6 introduces the analysis of German clauses in a parallel way to what has been done for English in Section 3.1.5.

## **3.1.1 Transformations**

In the previous chapter, I introduced simple phrase structure grammars. Chomsky (1957: Chapter 5) criticized this kind of rewrite grammar since – in his opinion – it is not clear how one can capture the relationship between active and passive sentences or the various ordering possibilities of constituents in a sentence. While it is of course possible to formulate different rules for active and passive sentences in a phrase structure grammar (e.g., one pair of rules for intransitive (1), one for transitive (2) and one for ditransitive verbs (3)), this would not adequately capture the fact that the same phenomenon occurs in the example pairs in (1)–(3):

	- b. weil because dort there noch still gearbeitet worked wurde aux 'because work was still being done there'
	- b. weil because der the Weltmeister world.champion geschlagen beaten wurde aux 'because the world champion was beaten'

	- (3) a. weil because der the Mann man der the Frau woman den the Schlüssel key stiehlt steals 'because the man is stealing the key from the woman'
		- b. weil because der the Frau woman der the Schlüssel key gestohlen stolen wurde aux 'because the key was stolen from the woman'

Chomsky (1957: 43) suggests a transformation that creates a connection between active and passive sentences. The transformation that he suggests for English corresponds to (4), which is taken from Klenk (2003: 74):

(4) NP V NP → 3 [AUX be] 2en [PP [P by] 1]
	1    2    3

This transformational rule maps a tree with the symbols on the left-hand side of the rule onto a tree with the symbols on the right-hand side of the rule. Accordingly, 1, 2 and 3 on the right of the rule correspond to symbols, which are above the numbers on the left-hand side. *en* stands for the morpheme which forms the participle (*seen*, *been*, …, but also *loved*). Both trees for (5a,b) are shown in Figure 3.1.

	- a. Kim loves Sandy.
	- b. Sandy is loved by Kim.

Figure 3.1: Application of passive transformation

The symbols on the left of transformational rules do not necessarily have to be in a local tree, that is, they can be daughters of different mothers as in Figure 3.1.
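To make the mechanics of (4) concrete, here is a deliberately simplified sketch (mine; it operates on strings rather than trees, and the spelling rule for the participle morpheme *en* is a crude stand-in for English morphology) of the mapping from the active pattern NP V NP to the passive pattern:

```python
# Transformation (4) over the three numbered constituents:
# NP(1) V(2) NP(3)  ->  3 [AUX be] 2+en [PP [P by] 1]
def passive_transform(np1: str, verb: str, np3: str) -> str:
    # 'en' stands for the participle morpheme; real English participle
    # morphology is more complex than this spelling rule.
    participle = verb + "d" if verb.endswith("e") else verb + "ed"
    return f"{np3} is {participle} by {np1}"

print(passive_transform("Kim", "love", "Sandy"))  # Sandy is loved by Kim
```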

Rewrite grammars were divided into four complexity classes based on their properties. The simplest grammars are of Type-3, whereas the most complex are of Type-0. The so-called context-free grammars we have dealt with thus far are of Type-2. Transformational grammars which allow symbols to be replaced by arbitrary other symbols are of Type-0 (Peters & Ritchie 1973). Research on the complexity of natural languages shows that the highest complexity level (Type-0) is too complex for natural language. It follows from this – assuming that one wants to have a restrictive formal apparatus for the description of grammatical knowledge (Chomsky 1965: 62) – that the form and potential power of transformations has to be restricted.<sup>2</sup> Another criticism of early versions of Transformational Grammar was that, due to a lack of restrictions, the way in which transformations interact was not clear. Furthermore, there were problems associated with transformations which delete material (see Peters & Ritchie 1973; Klenk 2003: Section 3.1.4). For this reason, new theoretical approaches such as Government & Binding (Chomsky 1981a) were developed. In this model, the form that grammatical rules can take is restricted (see Section 3.1.4). Elements moved by transformations are still represented in their original position, which makes them recoverable there, and hence the information necessary for semantic interpretation is available. There are also more general principles which serve to restrict transformations.

After some initial remarks on the model assumed for language acquisition in GB theory, we will take a closer look at phrase structure rules, transformations and constraints.

# **3.1.2 The hypothesis regarding language acquisition: Principles & Parameters**

Chomsky (1965: Section I.8) assumes that linguistic knowledge must be innate since the language system is, in his opinion, so complex that it would be impossible to learn a language from the given input using more general cognitive principles alone (see also Section 13.8). If it is not possible to learn language solely through interaction with our environment, then at least part of our language ability must be innate. The question of exactly what is innate and if humans actually have an innate capacity for language remains controversial and the various positions on the question have changed over the course of the last decades. Some notable works on this topic are Pinker (1994), Tomasello (1995), Wunderlich (2004), Hauser, Chomsky & Fitch (2002), Chomsky (2007), and Pullum & Scholz (2001) and other papers in the same volume. For more on this discussion, see Chapter 13.

Chomsky (1981a) also assumes that there are general, innate principles which linguistic structure cannot violate. These principles are parametrized, that is, there are options. Parameter settings can differ between languages. An example for a parametrized principle is shown in (6):

(6) Principle: A head occurs before or after its complement(s) depending on the value of the parameter position.

The Principles & Parameters model (P&P model) assumes that a significant part of language acquisition consists of extracting enough information from the linguistic input in

<sup>2</sup> For more on the power of formal languages, see Chapter 17.

order to be able to set parameters. Chomsky (2000: 8) compares the setting of parameters to flipping a switch. For a detailed discussion of the various assumptions about language acquisition in the P&P-model, see Chapter 16. Speakers of English have to learn that heads occur before their complements in their language, whereas a speaker of Japanese has to learn that heads follow their complements. (7) gives the respective examples:

	- b. zibun refl -no from syasin-o picture mise-te showing iru be

As one can see, the Japanese verb, noun and prepositional phrases are a mirror image of the corresponding phrases in English. (8) provides a summary and shows the parametric value for the position parameter:


Investigating languages based on their differences with regard to certain assumed parameters has proven to be a very fruitful line of research in the last few decades and has resulted in an abundance of comparative cross-linguistic studies.
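The switch metaphor translates directly into a toy model: one abstract combination rule plus a binary head-direction parameter yields the English and Japanese orders in (7). The following sketch is merely illustrative; the function and its arguments are invented:

```python
# One abstract rule, one binary parameter: the relative order of head
# and complements depends solely on the head-direction setting.
def combine(head: str, complements: list[str], head_initial: bool) -> list[str]:
    return [head, *complements] if head_initial else [*complements, head]

print(combine("showing", ["a picture of himself"], head_initial=True))
print(combine("mise-te iru", ["zibun-no syasin-o"], head_initial=False))
```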

After these introductory comments on language acquisition, the following sections will discuss the basic assumptions of GB theory.

# **3.1.3 The T model**

Chomsky criticized simple PSGs for not being able to adequately capture certain correlations. An example of this is the relationship between active and passive sentences. In phrase structure grammars, one would have to formulate active and passive rules for intransitive, transitive and ditransitive verbs (see the discussion of (1)–(3) above). The fact that the passive can otherwise be consistently described as the suppression of the most prominent argument is not captured by phrase structure rules. Chomsky therefore assumes that there is an underlying structure, the so-called *deep structure*, and that other structures are derived from this. The general architecture of the so-called T model is discussed in the following subsections.

## **3.1.3.1 D-structure and S-structure**

During the derivation of new structures, parts of the deep structure can be deleted or moved. In this way, one can explain the relationship between active and passive sentences. As the result of this kind of manipulation of structures, also called transformations, one derives a new structure, the *surface structure*, from the original deep structure. Since the surface structure does not actually mirror the actual use of words in a sentence in some versions of the theory, the term *S-structure* is sometimes used instead so as to avoid misunderstandings.


(9) *surface structure* = S-structure
	*deep structure* = D-structure

Figure 3.2 gives an overview of the GB architecture: phrase structure rules and the lexicon license the D-structure, from which the S-structure is derived by means of transformations. S-structure feeds into Phonetic Form (PF) and Logical Form (LF). The model is referred to as the *T model* (or Y model) because D-structure, S-structure, PF and LF form an upside-down T (or Y). We will look at each of these individual components in more detail.

Figure 3.2: The T model

Using phrase structure rules, one can describe the relationships between individual elements (for instance words and phrases, sometimes also parts of words). The format for these rules is X syntax (see Section 2.5). The lexicon, together with the structure licensed by X syntax, forms the basis for D-structure. D-structure is then a syntactic representation of the selectional grid (= valence classes) of individual word forms in the lexicon.

The lexicon contains a lexical entry for every word which comprises information about morphophonological structure, syntactic features and selectional properties. This will be explained in more detail in Section 3.1.3.4. Depending on one's exact theoretical assumptions, morphology is viewed as part of the lexicon. Inflectional morphology is, however, mostly consigned to the realm of syntax. The lexicon is an interface for semantic interpretation of individual word forms.

The surface position in which constituents are realized is not necessarily the position they have in D-structure. For example, a sentence with a ditransitive verb has the following ordering variants:

(10) a. [dass] that der the.nom Mann man dem the.dat Kind child das the.acc Buch book gibt gives 'that the man gives the child the book'


The following transformational derivations are assumed for the orders above: (10b) is derived from (10a) by fronting the verb, and (10c) is derived from (10b) by fronting the nominative noun phrase. In GB theory, there is only one very general transformation: Move-α = "Move anything anywhere!". What exactly can be moved where and for which reason is determined by principles. Examples of such principles are the Theta-Criterion and the Case Filter, which will be dealt with below.

The relations between a predicate and its arguments that are determined by the lexical entries have to be accessible for semantic interpretation at all representational levels. For this reason, the base position of a moved element is marked with a trace. This means, for instance, that the position in which the fronted *gibt* 'gives' originated is indicated in (11b). The respective marking is referred to as a *trace* or a *gap*. Such empty elements may seem off-putting when one first encounters them, but I already motivated the assumption of empty elements in nominal structures in Section 2.4.1 (page 68).

	- b. Gibt gives der the Mann man dem the Kind child das the Buch book \_ ? 'Does the man give the child the book?'
	- c. [Der the Mann] man gibt gives \_ dem the Kind child das the Buch book \_ . 'The man gives the child the book.'

(11c) is derived from (11a) by means of two movements, which is why there are two traces in (11c). The traces are marked with indices so it is possible to distinguish the moved constituents. The corresponding indices are then present on the moved constituents. Sometimes, *t* (for *trace*) is used to represent traces.

The S-structure derived from the D-structure is a surface-like structure but should not be equated with the structure of actual utterances.

### **3.1.3.2 Phonetic Form**

Phonological operations are represented at the level of Phonetic Form (PF). PF is responsible for creating the form which is actually pronounced. For example, so-called *wanna*-contraction takes place at PF (Chomsky 1981a: 20–21).

	- a. The students want to visit Paris.
	- b. The students wanna visit Paris.

The contraction in (12) is licensed by the optional rule in (13):

(13) want + to → wanna
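As a string-level illustration of the optional rule in (13) (my sketch; actual *wanna*-contraction is subject to syntactic conditions, e.g., intervening *wh*-traces block it, which a plain string substitution cannot capture):

```python
import re

# Rule (13) as an optional string rewrite at PF: want + to -> wanna.
def wanna_contraction(sentence: str) -> str:
    return re.sub(r"\bwant to\b", "wanna", sentence)

print(wanna_contraction("The students want to visit Paris."))
# The students wanna visit Paris.
```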

#### **3.1.3.3 Logical Form**

Logical Form is the syntactic level which mediates between S-structure and the semantic interpretation of a sentence. Some of the phenomena which are dealt with by LF are anaphoric reference of pronouns, quantification and control.

Syntactic factors play a role in resolving anaphoric dependencies. An important component of GB theory is Binding Theory, which seeks to explain what a pronoun can or must refer to and when a reflexive pronoun can or must be used. (14) gives some examples of both personal and reflexive pronouns:

	- a. Peter Peter kauft buys einen a Tisch. table(m) Er he gefällt likes ihm. him 'Peter is buying a table. He likes it/him.'
	- b. Peter Peter kauft buys eine a Tasche. bag(f) Er he gefällt likes ihm. him 'Peter is buying a bag. He likes it/him.'
	- c. Peter Peter kauft buys eine a Tasche. bag(f) Er he gefällt likes sich. himself 'Peter is buying a bag. He likes himself.'

In the first example, *er* 'he' can refer to either Peter, the table or something/someone else that was previously mentioned in the context. *ihm* 'him' can refer to Peter or someone in the context. Reference to the table is restricted by world knowledge. In the second example, *er* 'he' cannot refer to *Tasche* 'bag' since *Tasche* is feminine and *er* is masculine. *er* 'he' can refer to Peter only if *ihm* 'him' does not refer to Peter. *ihm* would otherwise have to refer to a person in the wider context. This is different in (14c). In (14c), *er* 'he' and *sich* 'himself' must refer to the same object. This is due to the fact that the reference of reflexives such as *sich* is restricted to a particular local domain. Binding Theory attempts to capture these restrictions.

LF is also important for quantifier scope. Sentences such as (15a) have two readings. These are given in (15b) and (15c).

	- b. ∀x∃y(dolphin′(x) → (shark′(y) ∧ attack′(x, y)))
	- c. ∃y∀x(dolphin′(x) → (shark′(y) ∧ attack′(x, y)))

The symbol ∀ stands for a *universal quantifier* and ∃ stands for an *existential quantifier*. The first formula corresponds to the reading that for every dolphin, there is a shark that it attacks and in fact, these can be different sharks. Under the second reading, there is exactly one shark such that all dolphins attack it. The question of when such an ambiguity arises and which reading is possible when depends on the syntactic properties of the given utterance. LF is the level which is important for the meaning of determiners such as *a* and *every*.
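The truth-conditional difference between the two readings can be checked in a small model. The following sketch evaluates both formulas over an invented domain of two dolphins and two sharks:

```python
# Toy model: two dolphins, two sharks, each dolphin attacks a different
# shark (entities and the attack relation are invented).
dolphins = {"d1", "d2"}
sharks = {"s1", "s2"}
attacks = {("d1", "s1"), ("d2", "s2")}

# Reading (15b): for every dolphin there is some shark it attacks.
reading_b = all(any((d, s) in attacks for s in sharks) for d in dolphins)
# Reading (15c): there is one shark that every dolphin attacks.
reading_c = any(all((d, s) in attacks for d in dolphins) for s in sharks)

print(reading_b)  # True in this model
print(reading_c)  # False in this model: the two readings come apart
```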

Control Theory is also specified with reference to LF. Control Theory deals with the question of how the semantic role of the infinitive subject in sentences such as (16) is filled.

(16) a. Die the Professorin professor schlägt suggests der the Studentin student vor, part die the Klausur test noch yet mal once zu to schreiben. write

'The professor advises the student to take the test again.'


### **3.1.3.4 The lexicon**

The meaning of words tells us that they have to be combined with certain roles like "acting person" or "affected thing" when creating more complex phrases. For example, the fact that the verb *beat* needs two arguments belongs to its semantic contribution. The semantic representation of the contribution of the verb *beat* in (17a) is given in (17b):

	- b. *beat*′ (x,y)

Dividing heads into valence classes is also referred to as *subcategorization*: *beat* is subcategorized for a subject and an object. This term comes from the fact that a head is already categorized with regard to its part of speech (verb, noun, adjective, …) and then further subclasses (e.g., intransitive or transitive verb) are formed with regard to valence information. Sometimes the phrase *X subcategorizes for Y* is used, which means *X selects Y*. *beat* is referred to as the predicate since *beat*′ is the logical predicate. The subject and object are the arguments of the predicate. There are several terms used to describe the set of selectional requirements, such as *argument structure*, *valence frame*, *subcategorization frame*, *thematic grid* and *theta-grid* or *θ-grid*.<sup>3</sup>

Adjuncts modify semantic predicates and when the semantic aspect is emphasized they are also called *modifiers*. Adjuncts are not present in the argument structure of predicates.

<sup>3</sup> The exact meaning of the terms is framework-dependent. Coming from an HPSG perspective, I use the first three terms referring to syntactic and semantic information, the latter two refer to the selection of semantic roles. GB researchers often refer to argument structure as containing semantic information, to valence frames as containing syntactic information and to subcategorization as a mix of syntactic and semantic information.

Following GB assumptions, arguments occur in specific positions in the clause – in so-called argument positions (e.g., the sister of an X<sup>0</sup> element, see Section 2.5). The Theta-Criterion states that elements in argument positions have to be assigned a semantic role – a so-called theta-role – and each role can be assigned only once (Chomsky 1981a: 36):

### **Principle 1 (Theta-Criterion)**


The arguments of a head are ordered, that is, one can differentiate between higher- and lower-ranked arguments. The highest-ranked argument of verbs and adjectives has a special status. Since GB assumes that it is often (and always in some languages) realized in a position outside of the verb or adjective phrase, it is often referred to as the *external argument*. The remaining arguments occur in positions inside of the verb or adjective phrase. These kind of arguments are dubbed *internal arguments* or *complements*. For simple sentences, this often means that the subject is the external argument.

When discussing types of arguments, one can identify three classes of theta-roles:


If a verb has several theta-roles of this kind to assign, Class 1 normally has the highest rank, whereas Class 3 has the lowest. Unfortunately, the assignment of semantic roles to actual arguments of verbs has received a rather inconsistent treatment in the literature. This problem has been discussed by Dowty (1991), who suggests using proto-roles. An argument is assigned the proto-agent role if it has sufficiently many of the properties that were identified by Dowty as prototypical properties of agents (e.g., animacy, volitionality).
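Dowty's proposal can be paraphrased as a simple threshold test. The sketch below is only a caricature of the idea; the property names and the threshold are invented for illustration:

```python
# Proto-agent assignment à la Dowty (1991): an argument counts as the
# proto-agent if it has sufficiently many prototypical agent properties.
PROTO_AGENT_PROPERTIES = {"volitional", "sentient", "causes_event", "moving"}

def is_proto_agent(properties: set[str], threshold: int = 2) -> bool:
    return len(properties & PROTO_AGENT_PROPERTIES) >= threshold

print(is_proto_agent({"volitional", "sentient"}))  # True
print(is_proto_agent({"moving"}))                  # False
```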

The mental lexicon contains *lexical entries* with the specific properties of syntactic words needed to use that word grammatically. Some of these properties are the following:


(18) shows an example of a lexical entry:


Assigning semantic roles to specific syntactic requirements (beneficiary = dative) is also called *linking*.

Arguments are ordered according to their ranking: the highest argument is furthest left. In the case of *helfen*, the highest argument is the external argument, which is why the agent is underlined. With so-called unaccusative verbs,<sup>4</sup> the highest argument is not treated as the external argument. It would therefore not be underlined in the corresponding lexical entry.

# **3.1.4 X theory**

In GB, it is assumed that all syntactic structures licensed by the core grammar<sup>5</sup> correspond to the X schema (see Section 2.5).<sup>6</sup> In the following sections, I will comment on the syntactic categories assumed and the basic assumptions with regard to the interpretation of grammatical rules.

### **3.1.4.1 Syntactic categories**

The categories which can be used for the variable X in the X schema are divided into lexical and functional categories. This correlates roughly with the difference between open and closed word classes. The following are lexical categories:


<sup>4</sup> See Perlmutter (1978) for a discussion of unaccusative verbs. The term *ergative verb* is also common, albeit a misnomer. See Burzio (1981, 1986) for the earliest work on unaccusatives in the Chomskyan framework and Grewendorf (1989) for German. Also, see Pullum (1988) on the usage of these terms and for a historical evaluation.

<sup>5</sup>Chomsky (1981a: 7–8) distinguishes between a regular area of language that is determined by a grammar that can be acquired using genetically determined language-specific knowledge and a periphery, to which irregular parts of language such as idioms (e.g., *to pull the wool over sb.'s eyes*) belong. See Section 16.3.

<sup>6</sup>Chomsky (1970: 210) allows for grammatical rules that deviate from the X schema. It is, however, common practice to assume that languages exclusively use X structures.

Lexical categories can be represented using binary features and a cross-classification:<sup>7</sup>


|        | −V             | +V            |
|--------|----------------|---------------|
| **−N** | P (preposition) | V (verb)      |
| **+N** | N (noun)        | A (adjective) |

Table 3.1: Representation of four lexical categories using two binary features

Adverbs are viewed as intransitive prepositions and are therefore captured by the decomposition in the table above.

Using this cross-classification, it is possible to formulate generalizations. One can, for example, simply refer to adjectives and verbs: all lexical categories which are [ +V ] are either adjectives or verbs. Furthermore, one can say of [ +N ] categories (nouns and adjectives) that they can bear case.
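The cross-classification and the generalizations it supports can be stated compactly. In the sketch below (mine), the feature pairs of Table 3.1 are listed once, and the natural classes mentioned in the text fall out as simple filters:

```python
# Table 3.1 as a mapping from feature pairs to category symbols.
categories = {
    ("-N", "-V"): "P",
    ("-N", "+V"): "V",
    ("+N", "-V"): "N",
    ("+N", "+V"): "A",
}

# [+V] picks out verbs and adjectives; [+N] the case-bearing categories.
print([c for (n, v), c in categories.items() if v == "+V"])  # ['V', 'A']
print([c for (n, v), c in categories.items() if n == "+N"])  # ['N', 'A']
```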

Apart from this, some authors have tried to associate the head position with the feature values in Table 3.1 (see e.g., Grewendorf 1988: 52; Haftka 1996: 124; G. Müller 2011: 238). With prepositions and nouns, the head precedes the complement in German:

(19) a. *für* for Maria Maria
	b. *Bild* picture von of Maria Maria

With adjectives and verbs, the head is final:

	- b. der the [dem the Kind child *helfende*] helping Mann man 'the man helping the child'
	- c. dem the Mann man *helfen* help 'help the man'

This data seems to suggest that the head is final with [ +V ] categories and initial with [ −V ] categories. Unfortunately, this generalization runs into the problem that there are also postpositions in German. These are, like prepositions, not verbal, but do occur after the NP they require:

<sup>7</sup> See Chomsky (1970: 199) for a cross-classification of N, A and V, and Jackendoff (1977: Section 3.2) for a cross-classification that additionally includes P but has a different feature assignment.

	- b. die the Nacht night *über* during 'during the night'

Therefore, one must either invent a new category, or abandon the attempt to use binary category features to describe ordering restrictions. If one were to place postpositions in a new category, it would be necessary to assume another binary feature.<sup>8</sup> Since this feature can have either a negative or a positive value, one would then have four additional categories. There are then eight possible feature combinations, some of which would not correspond to any plausible category.

For functional categories, GB does not propose a cross-classification. Usually, the following categories are assumed:


#### **3.1.4.2 Assumptions and rules**

In GB, it is assumed that all rules must follow the X format discussed in Section 2.5. In other theories, rules which correspond to the X format are used alongside rules that do not. If the strict version of X theory is assumed, this comes with the assumption of *endocentricity*: every phrase has a head and every head is part of a phrase (put more technically: every head projects to a phrase).

Furthermore, as with phrase structure grammars, it is assumed that the branches of tree structures cannot cross (*Non-Tangling Condition*). This assumption is made by the majority of theories discussed in this book. There are, however, some variants of TAG, HPSG, Construction Grammar, and Dependency Grammar which allow crossing branches and therefore discontinuous constituents (Becker, Joshi & Rambow 1991; Reape 1994; Bergen & Chang 2005; Heringer 1996: 261; Eroms 2000: Section 9.6.2).

<sup>8</sup>Martin Haspelmath has pointed out that one could assume a rule that moves a post-head argument into a pre-head position (see van Riemsdijk 1978: 89 for the discussion of a transformational solution). This would be parallel to the realization of prepositional arguments of adjectives in German:

(i) a. auf on seinen his Sohn son stolz proud 'proud of his son'
	b. stolz proud auf on seinen his Sohn son

But note that the situation is different with postpositions: while all adjectives that take prepositional objects allow for both orders, this is not the case for prepositions. Most prepositions do not allow their object to occur before them. It is an idiosyncratic feature of some postpositions that they want their argument to the left.

In X theory, one normally assumes that there are at most two projection levels (X′ and X′′). However, there are some versions of Mainstream Generative Grammar and other theories which allow three or more levels (Jackendoff 1977, Uszkoreit 1987). In this chapter, I follow the standard assumption that there are two projection levels, that is, phrases have at least three levels:

• X<sup>0</sup> = head

• X′ = intermediate projection

• XP (= X′′) = maximal projection
### **3.1.5 CP and IP in English**

Most work in Mainstream Generative Grammar is heavily influenced by previous publications dealing with English. If one wants to understand GB analyses of German and other languages, it is important to first understand the analyses of English and, for this reason, this will be the focus of this section. The CP/IP system is also assumed in LFG grammars of English and thus the following section also provides a foundation for understanding some of the fundamentals of LFG presented in Chapter 7.

In earlier work, the rules in (22a) and (22b) were proposed for English sentences (Chomsky 1981a: 19).

(22) a. S → NP VP b. S → NP Infl VP

Infl stands for *Inflection*, as inflectional affixes are inserted at this position in the structure. The symbol AUX was also used instead of Infl in earlier work, since auxiliary verbs are treated in the same way as inflectional affixes. Figure 3.3 on the next page shows a sample analysis of a sentence with an auxiliary, which uses the rule in (22b).

Together with its complements, the verb forms a structural unit: the VP. The constituent status of the VP is supported by several constituent tests and further differences between subjects and objects regarding their positional restrictions.

Figure 3.3: Sentence with an auxiliary verb following Chomsky (1981a: 19)

The rules in (22) do not follow the X template since there is no symbol on the right-hand side of the rule with the same category as one on the left-hand side, that is, there is no head. In order to integrate rules like (22) into the general theory, Chomsky (1986a: 3) developed a rule system with two layers above the verb phrase (VP), namely the CP/IP system. CP stands for *Complementizer Phrase*. The head of a CP can be a complementizer. Before we look at CPs in more detail, I will discuss an example of an IP in this new system. Figure 3.4 on the facing page shows an IP with an auxiliary in the I<sup>0</sup> position. As we can see, this corresponds to the structure of the X template: I<sup>0</sup> is a head, which takes the VP as its complement and thereby forms I′. The subject is the specifier of the IP. Another way to phrase this is to say that the subject is in the specifier position of the IP. This position is usually referred to as SpecIP.<sup>9</sup>

The sentences in (23) are analyzed as complementizer phrases (CPs), the complementizer is the head:

	- a. that Ann will read the newspaper
	- b. that Ann reads the newspaper

In sentences such as (23), the CPs do not have a specifier. Figure 3.5 on the next page shows the analysis of (23a).
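The CP/IP skeleton just described can be written down as a small context-free fragment. The sketch below is mine, not the book's: bar levels are collapsed where nothing hinges on them, and *the newspaper* is crudely treated as a single token. It parses (23a) with a structure along the lines of Figure 3.5:

```python
import nltk

# A CP/IP fragment for (23a): C takes an IP, I takes a VP, and the
# subject NP sits in the specifier of IP.
grammar = nltk.CFG.fromstring("""
CP -> C IP
IP -> NP IBAR
IBAR -> I VP
VP -> V NP
C -> 'that'
I -> 'will'
V -> 'read'
NP -> 'Ann' | 'the_newspaper'
""")

tokens = 'that Ann will read the_newspaper'.split()
for tree in nltk.ChartParser(grammar).parse(tokens):
    print(tree)
```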

Yes/no-questions in English such as those in (24) are formed by moving the auxiliary verb in front of the subject.

(24) Will Ann read the newspaper?

Let us assume that the structure of questions corresponds to the structure of sentences with complementizers. This means that questions are also CPs. Unlike the sentences in (23), however, there is no subordinating conjunction. In the D-structure of questions, the C<sup>0</sup> position is empty and the auxiliary verb is later moved to this position. Figure 3.6 shows an analysis of (24). The original position of the auxiliary is marked by the trace \_ , which is coindexed with the moved auxiliary.

<sup>9</sup> Sometimes SpecIP and similar labels are used in trees (for instance by Haegeman (1994), Meinunger (2000) and Lohnstein (2014)). I avoid this in this book since SpecIP, SpecAdvP are not categories like NP or AP or AdvP but positions that items of a certain category can take. See Chapter 2 on the phrase structure rules that license trees.

*wh*-questions are formed by the additional movement of a constituent in front of the auxiliary; that is, into the specifier position of the CP. Figure 3.7 on the facing page shows the analysis of (25):

#### (25) What will Ann read?

As before, the movement of the object of *read* is indicated by a trace. This is important when constructing the meaning of the sentence. The verb assigns some semantic role to the element in its object position. Therefore, one has to be able to "reconstruct" the fact that *what* actually originates in this position. This is ensured by coindexation of the trace with *what*.

Several ways to depict traces are used in the literature. Some authors assume a trace instead of the object NP as in Figure 3.8a (Grewendorf 1988: 249, 322; Haegeman 1994: 420). Others have the object NP in the tree and indicate the movement by a trace that is dominated by the NP as in Figure 3.8b (von Stechow & Sternefeld 1988: 376; Grewendorf 1988: 185; Haegeman 1994: 355; Sternefeld 2006: 333). The first proposal directly reflects the assumption that a complete phrase is moved and leaves a trace that represents the thing that is moved. If one thinks about the properties of the trace, it is clear that it has the same category as the element that occupied this position before movement. Hence, the second way to represent the moved category is appropriate as well: Figure 3.8b basically says that the object that is moved is an NP but that there is nothing to pronounce. Given what was just said, the most appropriate way to represent movement would be the one in Figure 3.8c. This picture is a mix of the two other pictures: the index is associated with the category and not with the empty phonology. In my opinion, this best depicts the fact that trace and filler are related. However, I have never seen this way of depicting movement in the GB literature and hence I will stick to the more common notation in Figure 3.8b. This way of depicting movement is also more similar to the representation that is used by all authors for the movement of words (so-called head movement). For example, the trace \_ , which stands for a moved I<sup>0</sup> in Figure 3.6, is never depicted as a daughter of I′ but always as a daughter of I<sup>0</sup>.

Figure 3.8: Alternative ways of depicting movement: (a) a trace instead of the moved constituent, (b) an XP with an empty daughter, (c) a mix of a and b


Until now, I have not yet discussed sentences without auxiliaries such as (23b). In order to analyze this kind of sentence, it is usually assumed that the inflectional affix is present in the I<sup>0</sup> position. An example analysis is given in Figure 3.9. Since the inflectional affix precedes the verb, some kind of movement operation still needs to take place. There are two suggestions in the literature: one is to assume lowering, that is, the affix moves down to the verb (Pollock 1989: 394; Chomsky 1991; Haegeman 1994: 110, 601; Sportiche, Koopman & Stabler 2013). The alternative is to assume that the verb moves up to the affix (Fanselow & Felix 1987: 258–259). Since theories with lowering of inflectional affixes are complicated for languages in which the verb ultimately ends up in C (basically all Germanic languages except English), I follow Fanselow & Felix's (1987: 258–259) suggestion for English and Grewendorf's (1995: 1289) suggestion for German and assume that the verb moves from V to I in English and from V to I to C in German.<sup>10</sup>

Figure 3.9: Sentence without auxiliary

Following this excursus on the analysis of English sentences, we can now turn to German.

<sup>10</sup>Sportiche, Koopman & Stabler (2013) argue for an affix lowering approach by pointing out that approaches assuming that the verb stem moves to I (their T) predict that adverbs appear to the right of the verb rather than to the left:

(i) b. John carefully studies Russian.
	c. \* John studies carefully Russian.

If the affix -*s* is in the position of the auxiliary and the verb moves to the affix, one would expect (i.c) to be grammatical rather than (i.b).

A third approach is to assume empty I (or more recently T) heads for present and past tense and have these heads select a fully inflected verb. See Carnie (2013: 220–221) for such an approach to English.

For German it was also suggested not to distinguish between I and V at all and treat auxiliaries like normal verbs (see footnote 11 below). In such approaches verbs are inflected as V, no I node is assumed (Haider 1993, 1997a).

## **3.1.6 The structure of the German clause**

The CP/IP model has been adopted by many scholars for the analysis of German.<sup>11</sup> The categories C, I and V, together with their specifier positions, can be linked to the topological fields as shown in Figure 3.10.

Figure 3.10: CP, IP and VP and the topological model of German

Note that SpecCP and SpecIP are not category symbols. They do not occur in grammars with rewrite rules. Instead, they simply describe positions in the tree.

As shown in Figure 3.10, it is assumed that the highest argument of the verb (the subject in simple sentences) has a special status. It is taken for granted that the subject always occurs outside of the VP, which is why it is referred to as the external argument. The VP itself does not have a specifier. In more recent work, however, the subject is generated in the specifier of the VP (Fukui & Speas 1986, Koopman & Sportiche 1991). In some languages, it is assumed that it moves to a position outside of the VP. In other languages such as German, this is the case at least under certain conditions (e.g., definiteness, see Diesing 1992). I am presenting the classical GB analysis here, where the subject

<sup>11</sup>For GB analyses without IP, see Bayer & Kornfilt (1990), Höhle (1991a: 157), Haider (1993, 1997a), Sternefeld (2006: Section IV.3), and Beck & Gergel (2014: 172). Haider assumes that the function of I is integrated into the verb. In LFG, an IP is assumed for English (Bresnan 2001: Section 6.2; Dalrymple 2001: Section 3.2.1), but not for German (Berman 2003a: Section 3.2.3.2). In HPSG, no IP is assumed.

is outside the VP. All arguments other than the subject are complements of the V, which are realized within the VP, that is, they are internal arguments. If the verb requires just one complement, then this is the sister of the head V<sup>0</sup> and the daughter of V′ according to the X schema. The accusative object is the prototypical complement.

Following the X template, adjuncts branch off above the complements of V′ . The analysis of a VP with an adjunct is shown in Figure 3.11.

(26) weil because der the Mann man morgen tomorrow den the Jungen boy trifft meets 'because the man is meeting the boy tomorrow'

Figure 3.11: Analysis of adjuncts in GB theory

# **3.2 Verb position**

In German, the positions of the heads of VP and IP (V<sup>0</sup> and I<sup>0</sup>) are to the right of their arguments, and V<sup>0</sup> and I<sup>0</sup> form part of the right sentence bracket. The subject and all other constituents (complements and adjuncts) occur to the left of V<sup>0</sup> and I<sup>0</sup> and form the middle field. It is assumed that German – at least in terms of D-structure – is an SOV language (= a language with the base order Subject–Object–Verb). The analysis of German as an SOV language is almost as old as Transformational Grammar itself: it was originally proposed by Bierwisch (1963: 34).<sup>12</sup> Unlike German, Germanic languages like Danish and English as well as Romance languages like French are SVO languages, whereas

<sup>12</sup>Bierwisch attributes the assumption of an underlying verb-final order to Fourquet (1957). A German translation of the French manuscript cited by Bierwisch can be found in Fourquet (1970: 117–135). For other proposals, see Bach (1962), Reis (1974), Koster (1975) and Thiersch (1978: Chapter 1). Analyses which assume that German has an underlying SOV pattern were also suggested in GPSG (Jacobs 1986: 110), LFG (Berman 1996: Section 2.1.4) and HPSG (Kiss & Wesche 1991; Oliva 1992; Netter 1992; Kiss 1993; Frank 1994; Kiss 1995; Feldhaus 1997; Meurers 2000; Müller 2005b, 2023a).

Welsh and Arabic are VSO languages. Around 40 % of all languages belong to the SOV languages, around 35 % are SVO (Dryer 2013c).

The assumption of verb-final order as the base order is motivated by the following observations:<sup>13</sup>

	- (27) a. weil because sie she morgen tomorrow an-fängt part-starts 'because she is starting tomorrow'
		- b. Sie she fängt starts morgen tomorrow an. part 'She is starting tomorrow.'

This unit can only be seen in verb-final structures, which speaks for the fact that this structure reflects the base order.

Verbs which are derived from a noun by back-formation (e.g., *uraufführen* 'to perform something for the first time') can often not be divided into their component parts, and V2 clauses are therefore ruled out. This was first mentioned by Höhle (1991b: 2) in unpublished work, now published as Höhle (2019: 370–371); the first published source is Haider (1993: 62):

	- b. \* Sie they ur-auf-führen pref-part-lead heute today das the Stück. play
	- c. \* Sie they führen lead heute today das the Stück play ur-auf. pref-part

The examples show that there is only one possible position for this kind of verb. This order is the one that is assumed to be the base order.

	- (29) a. Der the Clown clown versucht, tries Kurt-Martin Kurt-Martin die the Ware goods zu to geben. give 'The clown is trying to give Kurt-Martin the goods.'
		- b. dass that der the Clown clown Kurt-Martin Kurt-Martin die the Ware goods gibt gives 'that the clown gives Kurt-Martin the goods'

<sup>13</sup>For points 1 and 2, see Bierwisch (1963: 34–36). For point 4 see Netter (1992: Section 2.3).


(30) a. dass er ihn gesehen<sup>3</sup> haben<sup>2</sup> muss<sup>1</sup>
        that he him seen have must
     b. at han må<sup>1</sup> have<sup>2</sup> set<sup>3</sup> ham
        that he must have seen him
        'that he must have seen him'

b. Peter liest wegen der Nachhilfestunden gut.
   Peter reads because.of the tutoring well
   'Peter can read well thanks to the tutoring.'

As Koster (1975: Section 6) and Reis (1980: 67) have shown, these are not particularly convincing counterexamples: the right sentence bracket is not filled in these examples, so they are not necessarily instances of normal reordering inside the middle field but could instead involve extraposition of the PP. As noted by Koster and Reis, these examples become ungrammatical if one fills the right bracket and does not extrapose the causal adjunct:

b. Hans hat gut gelesen wegen der Nachhilfestunden.
   Hans has well read because.of the tutoring
   'Hans has been reading well because of the tutoring.'

However, the following example from Crysmann (2004: 383) shows that, even with the right bracket occupied, one can still have an order where an adjunct to the right has scope over one to the left:

(iii) Da muß es schon erhebliche Probleme mit der Ausrüstung gegeben haben, da wegen schlechten Wetters ein Reinhold Messner niemals aufgäbe.
      there must expl already serious problems with the equipment given have since because.of bad weather a Reinhold Messner never would.give.up
      'There really must have been some serious problems with the equipment because someone like Reinhold Messner would never give up just because of some bad weather.'

Nevertheless, this does not change anything regarding the fact that the corresponding cases in (31) and (32) have the same scope relations regardless of the position of the verb. The general means of semantic composition may well have to be implemented in the same way as in Crysmann's analysis.

<sup>14</sup>At this point, it should be mentioned that there seem to be exceptions from the rule that modifiers to the left take scope over those to their right. Kasper (1994: 47) discusses examples such as (i), which go back to Bartsch & Vennemann (1972: 137).

b. dass er [nicht [absichtlich lacht]]
   that he not intentionally laughs
   'that he is not laughing intentionally'

It is interesting to note that scope relations are not affected by verb position. If one assumes that sentences with verb-second order have the underlying structure in (31), then this fact requires no further explanation. (32) shows the derived S-structure for (31):


(32) Er lacht [nicht [absichtlich _ ]].
     he laughs not intentionally
     'He is not laughing intentionally.'

After motivating and briefly sketching the analysis of verb-final order, I will now look at the CP/IP analysis of German in more detail. C<sup>0</sup> corresponds to the left sentence bracket and can be filled in two different ways: in subordinate clauses introduced by a conjunction, the subordinating conjunction (the complementizer) occupies C<sup>0</sup> as in English. The verb remains in the right sentence bracket, as illustrated by (33).

(33) dass jeder diese Frau kennt
     that everybody this woman knows
     'that everybody knows this woman'

Figure 3.12 on the following page gives an analysis of (33). In verb-first and verb-second clauses, the finite verb is moved to C<sup>0</sup> via the I<sup>0</sup> position: V<sup>0</sup> → I<sup>0</sup> → C<sup>0</sup> (Grewendorf 1995: 1289). Figure 3.13 on page 107 shows the analysis of (34):

(34) Kennt jeder diese Frau?
     knows everybody this woman
     'Does everybody know this woman?'

The C<sup>0</sup> position is empty in the D-structure of (34). Since it is not occupied by a complementizer, the verb can move there.

# **3.3 Long-distance dependencies**

The SpecCP position corresponds to the prefield and can be filled by any XP in declarative clauses in German. In this way, one can derive the sentences in (36) from (35) by moving a constituent in front of the verb:

Figure 3.12: Sentence with a complementizer in C<sup>0</sup>

(36) a. Der Mann gibt dem Kind jetzt den Mantel.
        the.nom man gives the.dat child now the.acc coat
     b. Dem Kind gibt der Mann jetzt den Mantel.
        the.dat child gives the.nom man now the.acc coat
     c. Den Mantel gibt der Mann dem Kind jetzt.
        the.acc coat gives the.nom man the.dat child now
     d. Jetzt gibt der Mann dem Kind den Mantel.
        now gives the.nom man the.dat child the.acc coat

Since any constituent can be placed in front of the finite verb, German is treated typologically as one of the verb-second languages (V2). Thus, it is a verb-second language with SOV base order. English, on the other hand, is an SVO language without the V2 property, whereas Danish is a V2 language with SVO as its base order (see Ørsnes 2009a for Danish).

Figure 3.13: Verb position in GB

Figure 3.14 on the following page shows the structure derived from Figure 3.13. The crucial factor in deciding which phrase to move is the *information structure* of the sentence: material connected to previously mentioned or otherwise known information is placed further to the left (preferably in the prefield), and new information tends to occur to the right. Fronting to the prefield in declarative clauses is often referred to as *topicalization*, but this is something of a misnomer, since the focus (informally: the constituent being asked for) can also occur in the prefield. Furthermore, expletive pronouns can occur there, and these are non-referential and as such cannot be linked to preceding or known information; hence expletives can never be topics.

Transformation-based analyses also work for so-called *long-distance dependencies*, that is, dependencies crossing several phrase boundaries:

(37) a. [Um zwei Millionen Mark] soll er versucht haben, [eine Versicherung _ zu betrügen].<sup>15</sup>
        around two million Deutsche.Marks should he tried have an insurance.company to deceive
        'He apparently tried to cheat an insurance company out of two million Deutsche Marks.'

<sup>15</sup>taz, 04.05.2001, p. 20.

Figure 3.14: Fronting in GB theory


'It is, however, more difficult for the Republicans to launch attacks against him.'

The elements in the prefield in the examples in (37) all originate from more deeply embedded phrases. In GB, it is assumed that long-distance dependencies across sentence boundaries are derived in steps (Grewendorf 1988: 75–79), that is, in the analysis of

<sup>16</sup>Spiegel, 8/1999, p. 18.

<sup>17</sup>Scherpenisse (1986: 84).

<sup>18</sup>taz, 08.02.2008, p. 9.

(37c), the interrogative pronoun is moved to the specifier position of the *dass*-clause and is moved from there to the specifier of the matrix clause. The reason for this is that there are certain restrictions on movement which must be checked locally.

# **3.4 Passive**

Before I turn to the analysis of the passive in Section 3.4.2, the first subsection will elaborate on the differences between structural and lexical case.

## **3.4.1 Structural and lexical case**

The case of many case-marked arguments is dependent on the syntactic environment in which the head of the argument is realized. These arguments are referred to as arguments with *structural case*. Case-marked arguments which do not bear structural case are said to have *lexical case*.<sup>19</sup>

The following are examples of structural case:<sup>20</sup>

(38) a. Der Installateur kommt.
        the.nom plumber comes
        'The plumber is coming.'
     b. Der Mann lässt den Installateur kommen.
        the man lets the.acc plumber come
        'The man is getting the plumber to come.'
     c. das Kommen des Installateurs
        the coming of.the plumber
        'the plumber's visit'

In the first example, the subject is in the nominative case, whereas *Installateur* 'plumber' is in the accusative in the second example and even in the genitive in the third, following nominalization. The accusative case of objects is normally structural case. This case becomes nominative under passivization and genitive in nominalizations:

(39) a. Judit schlägt den Weltmeister.
        Judit beats the.acc world.champion
        'Judit beats the world champion.'
     b. Der Weltmeister wird geschlagen.
        the.nom world.champion aux beaten
        'The world champion is beaten.'
     c. das Schlagen des Weltmeisters
        the beating of.the world.champion
        'the beating of the world champion'

<sup>19</sup>Furthermore, there is a so-called *agreeing case* (see page 41) and *semantic case*. Agreeing case is found in predicatives. This case also changes depending on the structure involved, but the change is due to the antecedent element changing its case. Semantic case depends on the function of certain phrases (e.g., temporal accusative adverbials). Furthermore, as with lexical case of objects, semantic case does not change depending on the syntactic environment. For the analysis of the passive, which will be discussed in this section, only structural and lexical case will be relevant.

<sup>20</sup>Compare Heinz & Matiasek (1994: 200).

(38b) is a so-called AcI construction. AcI stands for *Accusativus cum infinitivo*, which means "accusative with infinitive". The logical subject of the embedded verb (*kommen* 'to come' in this case) becomes the accusative object of the matrix verb *lassen* 'to let'. Examples of AcI verbs are perception verbs such as *hören* 'to hear' and *sehen* 'to see' as well as *lassen* 'to let'.


Unlike the accusative, the genitive governed by a verb is a lexical case. The case of a genitive object does not change when the verb is passivized.

(40) b. Der Opfer wird gedacht.
        the.gen victims aux remembered
        'The victims are being remembered.'

(40b) is an example of the so-called *impersonal passive*. Unlike example (39b), where the accusative object became the subject, there is no subject in (40b). See Section 1.7.1. Similarly, there is no change in case with dative objects:

(41) a. Der Mann hat ihm geholfen.
        the man has him.dat helped
        'The man has helped him.'
     b. Ihm wird geholfen.
        him.dat aux helped
        'He is being helped.'

It remains controversial whether all datives should be treated as lexical or whether some or all of the datives in verbal environments should be treated as instances of structural case. For reasons of space, I will not recount this discussion but instead refer the interested reader to Chapter 14 of Müller (2007a). In what follows, I assume – like Haider (1986a: 20) – that the dative is in fact a lexical case.

## **3.4.2 Case assignment and the Case Filter**

In GB, it is assumed that the subject receives case from (finite) I and that the case of the remaining arguments comes from V (Chomsky 1981a: 50; Haider 1984: 26; Fanselow & Felix 1987: 71–73).

## **Principle 2 (Case Principle)**
- Finite I assigns nominative case to the subject in SpecIP.
- V assigns accusative case to its complements, provided they bear structural case.
The Case Filter rules out structures where case has not been assigned to an NP.

Figure 3.15 shows the Case Principle in action with the example in (42a).<sup>21</sup>

(42) b. [dass] der Junge der Frau gezeigt wird
        that the boy.nom the.dat woman shown aux
        'that the boy is shown to the woman'

Figure 3.15: Case and theta-role assignment in active clauses

The passive morphology blocks the subject and absorbs the structural accusative. The object that would get accusative in the active receives only a semantic role in its base position in the passive, but it does not get the absorbed case. Therefore, it has to move to a position where case can be assigned to it (Chomsky 1981a: 124). Figure 3.16 shows how this works for example (42b). This movement-based analysis works well for English since the underlying object always has to move:

<sup>21</sup>The figure does not correspond to X theory in its classic form, since *der Frau* 'the woman' is a complement which is combined with V′. In classical X theory, all complements have to be combined with V<sup>0</sup>. This leads to a problem in ditransitive structures since the structures have to be binary (see Larson (1988) for a treatment of double object constructions). Furthermore, in the following figures the verb has been left in V<sup>0</sup> for reasons of clarity. In order to create a well-formed S-structure, the verb would have to move to its affix in I<sup>0</sup>. Note also that the assignment of the subject theta-role by the verb crosses a phrase boundary. This problem can be solved by assuming that the subject is generated within the VP, gets a theta role there and then moves to SpecIP. An alternative suggestion was to assume that the VP assigns a semantic role to SpecIP (Chomsky 1981a: 104–105, Aoun & Sportiche 1983: 229).

Figure 3.16: Case and theta-role assignment in passive clauses

(43) a. The mother gave [the girl] [a cookie].
     b. [The girl] was given [a cookie] (by the mother).
     c. * It was given [the girl] [a cookie].

(43c) shows that filling the subject position with an expletive is not possible, so the object really has to move. However, Lenerz (1977: Section 4.4.3) showed that such a movement is not obligatory in German. (44) illustrates:

(44) b. weil dem Jungen der Ball geschenkt wurde
        because the.dat boy the.nom ball given aux
        'because the ball was given to the boy'
     c. weil der Ball dem Jungen geschenkt wurde
        because the.nom ball the.dat boy given aux

In comparison to (44c), (44b) is the unmarked order. *der Ball* 'the ball' in (44b) occurs in the same position as *den Ball* in (44a); that is, no movement is necessary, only the case differs. (44c) is, however, somewhat marked in comparison to (44b). So, if one assumed (44c) to be the normal order for passives, with (44b) derived from it by movement of *dem Jungen* 'the boy', then (44b) should be more marked than (44c), contrary to the facts. To solve this problem, an analysis involving abstract movement has been proposed for cases such as (44b): the elements stay in their positions but are connected to the subject position and receive their case information from there. Grewendorf (1988: 155–157, 1995: 1311) assumes that there is an empty expletive pronoun in the subject position of sentences such as (44b), as well as in the subject position of sentences with an impersonal passive such as (45):<sup>22</sup>

(45) weil heute nicht gearbeitet wird
     because today not worked aux
     'because there will be no work done today'

A silent expletive pronoun is something that one cannot see or hear and that does not carry any meaning. For discussion of this kind of empty element, see Section 13.1.3 and Chapter 19 and Müller (2022: Section 7).

In the following chapters, I describe alternative treatments of the passive that do without mechanisms such as empty elements connected to argument positions, and that instead seek to describe the passive in a more general, cross-linguistically consistent way as the suppression of the most prominent argument.

A further question which needs to be answered is why the accusative object does not receive case from the verb. This is captured by a constraint which goes back to Burzio (1986: 178–185) and is therefore referred to as *Burzio's Generalization*.<sup>23</sup>

(46) Burzio's Generalization (modified):

If V does not have an external argument, then it does not assign (structural) accusative case.
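Stated schematically, the modified generalization and Burzio's original biconditional can be contrasted as follows. This rendering is my own shorthand, not notation from the GB literature; predicates such as *ext-arg* simply label the relevant lexical properties:

```latex
% Modified, one-directional version, as in (46):
\forall V\; \bigl[\neg\,\mathrm{ext\text{-}arg}(V) \;\rightarrow\;
                  \neg\,\mathrm{assigns\text{-}structural\text{-}acc}(V)\bigr]

% Burzio's original, bidirectional version (problematic in both
% directions, as footnote 23 explains):
\forall V\; \bigl[\mathrm{assigns\text{-}acc}(V) \;\leftrightarrow\;
                  \theta\mathrm{\text{-}marks\text{-}subject}(V)\bigr]
```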

Koster (1986: 12) has pointed out that the passive in English cannot be derived by Case Theory since if one allowed empty expletive subjects for English as well as German and Dutch, then it would be possible to have analyses such as the following in (47) where np is an empty expletive:

<sup>22</sup>See Koster (1986: 11–12) for a parallel analysis for Dutch as well as Lohnstein (2014: 180) for a movement-based account of the passive that also involves an empty expletive for the analysis of the impersonal passive.

<sup>23</sup>Burzio's original formulation was equivalent to the following: a verb assigns accusative if and only if it assigns a semantic role to its subject. This claim is problematic from both sides. In (i), the verb does not assign a semantic role to the subject; however, there is nevertheless accusative case:

(i) Mich friert.
    me.acc freezes
    'I am freezing.'

One therefore has to differentiate between structural and lexical accusative and modify Burzio's Generalization accordingly. The existence of verbs like *begegnen* 'to bump into' is problematic for the other side of the implication: *begegnen* has a subject but still does not assign accusative, but rather dative:

(ii) Peter begegnete einem Mann.
     Peter met a.dat man
     'Peter met a man.'

See Haider (1999) and Webelhuth (1995: 89) as well as the references cited there for further problems with Burzio's Generalization.

(47) np was read the book.

Koster rather assumes that subjects in English are either bound by other elements (that is, non-expletive) or lexically filled, that is, filled by visible material. Therefore, the structure in (47) would be ruled out and it would be ensured that *the book* would have to be placed in front of the finite verb so that the subject position is filled.

# **3.5 Local reordering**

Arguments in the middle field can, in principle, occur in an almost arbitrary order. (48) exemplifies this:

(48) a. [weil] der Mann dem Kind das Buch gibt
        because the man the child the book gives
        'because the man gives the child the book'
     b. [weil] der Mann das Buch dem Kind gibt
        because the man the book the child gives
     c. [weil] das Buch der Mann dem Kind gibt
        because the book the man the child gives
     d. [weil] das Buch dem Kind der Mann gibt
        because the book the child the man gives
     e. [weil] dem Kind der Mann das Buch gibt
        because the child the man the book gives
     f. [weil] dem Kind das Buch der Mann gibt
        because the child the book the man gives

In (48b–f), the constituents receive different stress and the number of contexts in which each sentence can be uttered is more restricted than in (48a) (Höhle 1982). The order in (48a) is therefore referred to as the *neutral order* or *unmarked order*.

Two proposals have been made for analyzing these orders: the first suggestion assumes that the five orderings in (48b–f) are derived from a single underlying order by means of Move-α (Frey 1993). As an example, the analysis of (48c) is given in Figure 3.17 on the next page. The object *das Buch* 'the book' is moved to the left and adjoined to the topmost IP.

An argument that has often been used to support this analysis is the fact that scope ambiguities exist in sentences with reorderings which are not present in sentences in the base order. The explanation of such ambiguities comes from the assumption that the scope of quantifiers can be derived from their position in the surface structure as well as their position in the deep structure. If the position in both the surface and deep structure are the same, that is, when there has not been any movement, then there is only one reading possible. If movement has taken place, however, then there are two possible readings (Frey 1993: 185):

Figure 3.17: Analysis of local reordering as adjunction to IP

(49) a. Es ist nicht der Fall, daß er mindestens einem Verleger fast jedes Gedicht anbot.
        it is not the case that he at.least one publisher almost every poem offered
        'It is not the case that he offered at least one publisher almost every poem.'
     b. Es ist nicht der Fall, daß er fast jedes Gedicht mindestens einem Verleger anbot.
        it is not the case that he almost every poem at.least one publisher offered
        'It is not the case that he offered almost every poem to at least one publisher.'
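The two readings at issue can be spelled out in predicate logic. The following is a simplified rendering, not notation from the book: 'almost every' is treated as a plain universal quantifier for readability, and $h$ stands for the referent of *er* 'he':

```latex
% Surface scope, the only reading of the base order (49a):
% negation > 'at least one publisher' > 'almost every poem'
\neg \exists x \bigl[ \mathit{publisher}(x) \land
    \forall y\, [\mathit{poem}(y) \rightarrow \mathit{offered}(h, x, y)] \bigr]

% Inverse scope, additionally available for the reordered (49b):
% negation > 'almost every poem' > 'at least one publisher'
\neg \forall y \bigl[ \mathit{poem}(y) \rightarrow
    \exists x\, [\mathit{publisher}(x) \land \mathit{offered}(h, x, y)] \bigr]
```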

It turns out that approaches assuming traces run into problems as they predict certain readings for sentences with multiple traces which do not exist (see Kiss 2001: 146 and Fanselow 2001: Section 2.6). For instance in an example such as (50), it should be possible to interpret *mindestens einem Verleger* 'at least one publisher' at the position of \_ , which would lead to a reading where *fast jedes Gedicht* 'almost every poem' has scope over *mindestens einem Verleger* 'at least one publisher'. However, this reading does not exist.

(50) Ich glaube, dass mindestens einem Verleger fast jedes Gedicht nur dieser Dichter _ _ angeboten hat.
     I believe that at.least one publisher almost every poem only this poet offered has
     'I think that only this poet offered almost every poem to at least one publisher.'

Sauerland & Elbourne (2002: 308) discuss analogous examples from Japanese, which they credit to Kazuko Yatsushiro. They develop an analysis where the first step is to move the accusative object in front of the subject. Then, the dative object is placed in front of that, and in a third movement, the accusative is moved once more. The last movement can take place either in the construction of the S-structure<sup>24</sup> or in the construction of the Phonological Form. In the latter case, the movement does not have any semantic effects. While this analysis can predict the correct available readings, it requires a number of additional movement operations with intermediate steps.

The alternative to a movement analysis is so-called *base generation*: the starting structure generated by phrase structure rules is referred to as the *base*. One variant of base generation assumes that the verb is combined with one argument at a time and each role is assigned in the respective head-argument configuration. The order in which arguments are combined with the verb is not specified, which means that all of the orders in (48) can be generated directly without any transformations.<sup>25</sup> (A small computational sketch of this idea is given after the examples in (51).) Fanselow (2001) suggested such an analysis within the framework of GB.<sup>26</sup> Note that such a base-generation analysis is incompatible with an IP approach that assumes that the subject is realized in the specifier of IP. An IP approach with base-generation of different argument orders would allow the complements to appear in any order within the VP, but the subject would come first since it is part of a different phrase. So the orders in (51a,b) could be analyzed, but the ones in (51c–f) could not:

(51) a. dass der Mann dem Kind ein Buch gibt
        that the.nom man the.dat child a.acc book gives
     b. dass der Mann ein Buch dem Kind gibt
        that the.nom man a.acc book the.dat child gives
     c. dass dem Kind der Mann ein Buch gibt
        that the.dat child the.nom man a.acc book gives
     d. dass dem Kind ein Buch der Mann gibt
        that the.dat child a.acc book the.nom man gives
     e. dass ein Buch der Mann dem Kind gibt
        that a.acc book the.nom man the.dat child gives
     f. dass ein Buch dem Kind der Mann gibt
        that a.acc book the.dat child the.nom man gives
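The base-generation idea can be made concrete with a small Prolog sketch. This is my own illustration, not code from the literature cited above: the verb carries a flat valence list, and arguments are checked off in whatever order they are encountered, so all permutations in (48) are licensed without movement:

```prolog
:- use_module(library(lists)).   % for select/3

% A verb is represented as v(Form, Subcat), where Subcat lists the
% arguments that still have to be realized.
% combine/3 saturates one argument; select/3 picks it from the valence
% list regardless of its position there, so surface order is free.
combine(v(Form, Subcat), np(Case), v(Form, Rest)) :-
    select(np(Case), Subcat, Rest).

saturated(v(_Form, [])).

% Example query: the arguments of 'gibt' realized in the order dat, acc, nom.
% ?- combine(v(gibt, [np(nom), np(dat), np(acc)]), np(dat), V1),
%    combine(V1, np(acc), V2),
%    combine(V2, np(nom), V3),
%    saturated(V3).
% true.
```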

<sup>24</sup>The authors are working in the Minimalist framework. This means there is no longer S-structure strictly speaking. I have simply translated the analysis into the terms used here.

<sup>25</sup>Compare this to the grammar in (6) on page 55. This grammar combines a V and an NP to form a new V. Since nothing is said about the case of the argument in the phrase structure rule, the NPs can be combined with the verb in any order.

<sup>26</sup>The base generation analysis is the natural analysis in the HPSG framework. It has already been developed by Gunji in 1986 for Japanese and will be discussed in more detail in Section 9.4. Sauerland & Elbourne (2002: 313–314) claim that they show that syntax has to be derivational, that is, a sequence of syntactic trees has to be derived. I am of the opinion that this cannot generally be shown to be the case. There is, for example, an analysis by Kiss (2001) which shows that scope phenomena can be explained well by constraint-based approaches.


For the discussion of different approaches to describing constituent position, see Fanselow (1993).

# **3.6 Summary and classification**

This section is for advanced readers. It compares GB with theories introduced later in the book. So I suggest coming back here after reading Chapters 4–12.

Works in GB and some contributions to the Minimalist Program (see Chapter 4) have led to a number of new discoveries in both language-specific and cross-linguistic research. In the following, I will focus on some aspects of German syntax.

The analysis of verb movement developed in Transformational Grammar by Bierwisch (1963: 34), Reis (1974), Koster (1975), Thiersch (1978: Chapter 1) and den Besten (1983) has become the standard analysis in almost all grammar models (possibly with the exception of Construction Grammar and Dependency Grammar).

The work by Lenerz (1977) on constituent order has influenced analyses in other frameworks (the linearization rules in GPSG and HPSG go back to Lenerz' descriptions). Haider's work on constituent order, case and passive (1984, 1985c, 1985b, 1986a, 1990, 1993) has had a significant influence on LFG and HPSG analyses of German.

The entire configurationality discussion, that is, whether it is better to assume that the subject of finite verbs in German is inside or outside the VP, was important (for instance Haider 1982, Grewendorf 1983, Kratzer 1984, 1996, Webelhuth 1985, Sternefeld 1985b, Scherpenisse 1986, Fanselow 1987, Grewendorf 1988, Dürscheid 1989, Webelhuth 1990, Oppenrieder 1991, Wilder 1991, Haider 1993, Grewendorf 1995, Frey 1993, Lenerz 1994, Meinunger 2000) and German unaccusative verbs received their first detailed discussion in GB circles (Grewendorf 1989, Fanselow 1992a). The works by Fanselow and Frey on constituent order, in particular with regard to information structure, have advanced German syntax quite considerably (Fanselow 1988, 1990, 1993, 2000a, 2001, 2003b,c, 2004a, Frey 2000, 2001, 2004b, 2005). Infinitive constructions, complex predicates and partial fronting have also received detailed and successful treatments in the GB/MP frameworks (Bierwisch 1963, Evers 1975, Haider 1982, 1986b, 1991b,a, 1993, Grewendorf 1983, 1987, 1988, den Besten 1985, Sternefeld 1985b, Fanselow 1987, 2002, von Stechow & Sternefeld 1988, Bayer & Kornfilt 1990; G. Müller 1996a, 1998, Vogel & Steinbach 1998). In the area of secondary predication, the work by Winkler (1997) is particularly noteworthy.

This list of works from subdisciplines of grammar is somewhat arbitrary (it corresponds more or less to my own research interests) and is very much focused on German. There are, of course, a wealth of other articles on other languages and phenomena, which should be recognized without having to be individually listed here.

In the remainder of this section, I will critically discuss two points: the model of language acquisition of the Principles & Parameters framework and the degree of formalization inside Chomskyan linguistics (in particular the last few decades and the consequences this has). Some of these points will be mentioned again in Part II.

### **3.6.1 Explaining language acquisition**

One of the aims of Chomskyan research on grammar is to explain language acquisition. In GB, one assumed a very simple set of rules that was the same for all languages (X theory), as well as general principles that hold for all languages but can be parametrized for individual languages or language classes. It was assumed that a parameter was relevant for multiple phenomena. The Principles & Parameters model was particularly fruitful and led to a number of interesting studies in which commonalities and differences between languages were uncovered. From the point of view of language acquisition, the idea of a parameter which is set according to the input has often been criticized, as it cannot be reconciled with observable facts: after setting a parameter, a learner should immediately have mastered certain aspects of that language. Chomsky (1986b: 146) uses the metaphor of switches which can be flipped one way or the other. As it is assumed that various areas of grammar are affected by parameters, setting one parameter should have a significant effect on the rest of the grammar of a given learner. However, the linguistic behavior of children does not change in the abrupt fashion that would be expected (Bloom 1993: 731; Haider 1993: 6; Abney 1996: 3; Ackerman & Webelhuth 1998: Section 9.1; Tomasello 2000, 2003). Furthermore, it has not been possible to prove that there is a correlation between a certain parameter and various grammatical phenomena. For more on this, see Chapter 16.

The Principles & Parameters model nevertheless remains interesting for cross-linguistic research. Every theory has to explain why the verb precedes its objects in English and follows them in Japanese. One can call this difference a parameter and then classify languages accordingly, but whether this is actually relevant for language acquisition is increasingly being called into question.

### **3.6.2 Formalization**

In his 1963 work on Transformational Grammar, Bierwisch writes the following:<sup>27</sup>

It is very possible that the rules that we formulated generate sentences which are outside of the set of grammatical sentences in an unpredictable way, that is, they

<sup>27</sup>Es ist also sehr wohl möglich, daß mit den formulierten Regeln Sätze erzeugt werden können, die auch in einer nicht vorausgesehenen Weise aus der Menge der grammatisch richtigen Sätze herausfallen, die also durch Eigenschaften gegen die Grammatikalität verstoßen, die wir nicht wissentlich aus der Untersuchung ausgeschlossen haben. Das ist der Sinn der Feststellung, daß eine Grammatik eine Hypothese über die Struktur einer Sprache ist. Eine systematische Überprüfung der Implikationen einer für natürliche Sprachen angemessenen Grammatik ist sicherlich eine mit Hand nicht mehr zu bewältigende Aufgabe. Sie könnte vorgenommen werden, indem die Grammatik als Rechenprogramm in einem Elektronenrechner realisiert wird, so daß überprüft werden kann, in welchem Maße das Resultat von der zu beschreibenden Sprache abweicht.

violate grammaticality due to properties that we did not deliberately exclude from our investigation. This is what is meant by the statement that a grammar is a hypothesis about the structure of a language. A systematic check of the implications of a grammar that is appropriate for natural languages is surely a task that can no longer be done by hand. It could be carried out by implementing the grammar as a program on a computer, making it possible to verify to what degree the result deviates from the language to be described. (Bierwisch 1963: 163)

Bierwisch's claim is even more valid in light of the empirical progress made in the last decades. For example, Ross (1967) identified restrictions for movement and long-distance dependencies and Perlmutter (1978) discovered unaccusative verbs in the 70s. For German, see Grewendorf (1989) and Fanselow (1992a). Apart from analyses of these phenomena, restrictions on possible constituent positions have been developed (Lenerz 1977), as well as analyses of case assignment (Yip, Maling & Jackendoff 1987, Meurers 1999c, Przepiórkowski 1999b) and theories of verbal complexes and the fronting of parts of phrases (Evers 1975, Grewendorf 1988, Hinrichs & Nakazawa 1994a, Kiss 1995; G. Müller 1998; Meurers 1999b; Müller 1999b, 2002a; De Kuthy 2002). All these phenomena interact! Consider another quote:

A goal of earlier linguistic work, and one that is still a central goal of the linguistic work that goes on in computational linguistics, is to develop grammars that assign a reasonable syntactic structure to every sentence of English, or as nearly every sentence as possible. This is not a goal that is currently much in fashion in theoretical linguistics. Especially in Government-Binding theory (GB), the development of large fragments has long since been abandoned in favor of the pursuit of deep principles of grammar. The scope of the problem of identifying the correct parse cannot be appreciated by examining behavior on small fragments, however deeply analyzed. Large fragments are not just small fragments several times over – there is a qualitative change when one begins studying large fragments. As the range of constructions that the grammar accommodates increases, the number of undesired parses for sentences increases dramatically. (Abney 1996: 20)

So, as Bierwisch and Abney point out, developing a sound theory of a large fragment of a human language is a truly demanding task. But what we aim for as theoretical linguists is much more: the aim is to formulate restrictions which ideally hold for all languages, or at least for certain language classes. It follows from this that one has to have an overview of the interaction of various phenomena in not just one but several languages. This task is so complex that individual researchers cannot manage it. This is the point at which computer implementations become helpful, as they immediately flag inconsistencies in a theory. After removing these inconsistencies, computer implementations can be used to systematically analyze test data or corpora and thereby check the empirical adequacy of the theory (Müller 1999b: Chapter 22; 2014c; 2015c; Oepen & Flickinger 1998; Bender 2008b; see Section 1.2).

More than 60 years after the first important published work by Chomsky, it is apparent that there has not been one large-scale implemented grammatical fragment on the


basis of Transformational Grammar analyses. Chomsky has certainly contributed to the formalization of linguistics and developed important formal foundations which are still relevant in the theory of formal languages in computer science and in theoretical computational linguistics (Chomsky 1959). However, in 1981, he had already turned his back on rigid formalization:

I think that we are, in fact, beginning to approach a grasp of certain basic principles of grammar at what may be the appropriate level of abstraction. At the same time, it is necessary to investigate them and determine their empirical adequacy by developing quite specific mechanisms. We should, then, try to distinguish as clearly as we can between discussion that bears on leading ideas and discussion that bears on the choice of specific realizations of them. (Chomsky 1981a: 2–3)

This is made explicit in a letter to *Natural Language and Linguistic Theory*:

Even in mathematics, the concept of formalization in our sense was not developed until a century ago, when it became important for advancing research and understanding. I know of no reason to suppose that linguistics is so much more advanced than 19th century mathematics or contemporary molecular biology that pursuit of Pullum's injunction would be helpful, but if that can be shown, fine. For the present, there is lively interchange and exciting progress without any sign, to my knowledge, of problems related to the level of formality of ongoing work. (Chomsky 1990: 146)

This departure from rigid formalization has led to there being a large number of publications inside Mainstream Generative Grammar with sometimes incompatible assumptions, to the point where it is no longer clear how one can combine the insights of the various publications. An example of this is the fact that the central notion of government has several different definitions (see Aoun & Sportiche 1983 for an overview<sup>28</sup>).

This situation has been criticized repeatedly since the 1980s, sometimes very harshly, by proponents of GPSG (Gazdar, Klein, Pullum & Sag 1985: 6; Pullum 1985, 1989a; Pullum 1991: 48; Kornai & Pullum 1990).

The lack of precision and of worked-out details<sup>29</sup> and the frequent modification of basic assumptions<sup>30</sup> have meant that insights gained by Mainstream Generative Grammar have rarely been translated into computer implementations. There are some implementations that are based on Transformational Grammar/GB/MP models or borrow ideas from Mainstream Generative Grammar (Petrick 1965, Zwicky, Friedman, Hall & Walker 1965, Kay 1967, Friedman 1969, Friedman, Bredt, Doran, Pollack & Martner 1971, Plath 1973, Morin 1973, Marcus 1980, Abney & Cole 1986, Kuhns 1986, Correa 1987, Stabler 1987,

<sup>28</sup>A further definition can be found in Aoun & Lightfoot (1984). This is, however, equivalent to an earlier version as shown by Postal & Pullum (1986: 104–106).

<sup>29</sup>See e.g., Kuhns (1986: 550), Crocker & Lewin (1992: 508), Kolb & Thiersch (1991: 262), Kolb (1997: 3) and Freidin (1997: 580), Veenstra (1998: 25, 47), Lappin et al. (2000a: 888) and Stabler (2011a: 397, 399, 400) for the latter.

<sup>30</sup>See e.g., Kolb (1997: 4), Fanselow (2009) and the quote from Stabler on page 177.

1992, 2001, Kolb & Thiersch 1991, Fong 1991, Crocker & Lewin 1992, Lohnstein 1993, Lin 1993, Fordham & Crocker 1994, Nordgård 1994, Veenstra 1998, Fong & Ginsburg 2012),<sup>31</sup> but these implementations often do not use transformations or differ greatly from the theoretical assumptions of the publications. For example, Marcus (1980: 102–104) and Stabler (1987: 5) use special purpose rules for auxiliary inversion.<sup>32</sup> These rules reverse the order of *John* and *has* for the analysis of sentences such as (52a) so that we get the order in (52b), which is then parsed with the rules for non-inverted structures.

(52) a. Has John scheduled the meeting for Wednesday?
     b. John has scheduled the meeting for Wednesday?

These rules for auxiliary inversion are very specific and explicitly reference the category of the auxiliary. This does not correspond to the analyses proposed in GB in any way. As we have seen in Section 3.1.5, there are no special transformational rules for auxiliary inversion. Auxiliary inversion is carried out by the more general transformation Move and the associated restrictive principles. It is not unproblematic that the explicit formulation of the rule refers to the category *auxiliary* as is clear when one views Stabler's GB-inspired phrase structure grammar:

	- b. s([First|L0],L,X0,X) :- aux\_verb(First), np(L0,L1,X0,X1), vp([First|L1],L,X1,X).

The rule in (53a) is translated into the Prolog predicate in (53b). The expression [First|L0] after the s corresponds to the string that is to be processed. The '|' operator divides the list into a first element and a rest: First is the first word to be processed and L0 contains all other words. In the analysis of (52a), First is *has* and L0 is *John scheduled the meeting for Wednesday*. The Prolog clause first checks whether First is an auxiliary (aux\_verb(First)); if this is the case, an attempt is made to prove that the list L0 begins with a noun phrase. Since *John* is an NP, this succeeds. L1 is the sublist of L0 which remains after the analysis of the NP, that is, *scheduled the meeting for Wednesday*. This list is then recombined with the auxiliary (First), and it is checked whether the resulting list *has scheduled the meeting for Wednesday* begins with a VP. This is the case, and the remaining list L is empty. As a result, the sentence has been successfully processed.
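To make this concrete, here is a minimal, self-contained version of the clause in (53b) that can be run in Prolog. The structure-building arguments (X0, X) are dropped for clarity, and the mini-lexicon is invented for this illustration; it covers just the words in (52):

```prolog
% Difference-list parsing: each predicate consumes a prefix of the
% input list and returns the remainder.
s([First|L0], L) :-
    aux_verb(First),        % the first word must be an auxiliary
    np(L0, L1),             % an NP follows it ...
    vp([First|L1], L).      % ... and the auxiliary is re-inserted before the VP

aux_verb(has).

np([john|L], L).

% A toy VP rule: an auxiliary plus a fixed verbal projection.
vp([Aux|L0], L) :-
    aux_verb(Aux),
    v(L0, L).

v([scheduled, the, meeting, for, wednesday|L], L).

% ?- s([has, john, scheduled, the, meeting, for, wednesday], []).
% true.
```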

The problem with this analysis is that exactly one word is checked in the lexicon. Sentences such as (54) cannot be analyzed:<sup>33</sup>

<sup>31</sup>See Fordham & Crocker (1994) for a combination of a GB approach with statistical methods.

<sup>32</sup>Nozohoor-Farshi (1986, 1987) has shown that Marcus' parser can only parse context-free languages. Since natural languages are of a greater complexity (see Chapter 17) and grammars of corresponding complexity are allowed by current versions of Transformational Grammar, Marcus' parser can be neither an adequate implementation of the Chomskyan theory in question nor a piece of software for analyzing natural language in general.

<sup>33</sup>For a discussion that shows that the coordination of lexical elements has to be an option in linguistic theories, see Abeillé (2006).


(54) Could or should we pool our capital with that of other co-ops to address the needs of a regional "neighborhood"?<sup>34</sup>

In this kind of sentence, two modal verbs have been coordinated. They then form an X<sup>0</sup> and – following GB analyses – can be moved together. If one wanted to treat these cases as Stabler does for the simplest case, then we would need to divide the list of words to be processed into two unlimited sub-lists and check whether the first list contains an auxiliary or several coordinated auxiliaries. We would require a recursive predicate aux\_verbs which somehow checks whether the sequence *could or should* is a well-formed sequence of auxiliaries. This should not be done by a special predicate but rather by syntactic rules responsible for the coordination of auxiliaries. The alternative to a rule such as (53a) would be the one in (55), which is the one that is used in theories like GPSG (Gazdar et al. 1985: 62), LFG (Falk 1984: 491), some HPSG analyses (Ginzburg & Sag 2000: 36), and Construction Grammar (Fillmore 1999):

(55) s → v(aux+), np, vp.

This rule would have no problems with coordination data like (54) as coordination of multiple auxiliaries would produce an object with the category v(aux+) (for more on coordination see Section 21.6.2). If inversion makes it necessary to stipulate a special rule like (53a), then it is not clear why one could not simply use the transformation-less rule in (55).
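For comparison, the rule in (55) can be emulated directly as a definite clause grammar. This sketch is mine, not code from the works cited above; the point is only that a coordination rule for the auxiliary category makes examples like (54) unproblematic:

```prolog
% s → v(aux+), np, vp, with v_aux standing in for v(aux+).
s        --> v_aux, np, vp.

% An auxiliary constituent is a single auxiliary or a coordination.
v_aux    --> aux_word.
v_aux    --> aux_word, [or], v_aux.

aux_word --> [could].
aux_word --> [should].

np --> [we].
vp --> [pool, our, capital].

% ?- phrase(s, [could, or, should, we, pool, our, capital]).
% true.
```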

In the MITRE system (Zwicky et al. 1965), there was a special grammar for the surface structure, from which the deep structure was derived via reverse application of transformations; that is, instead of using one grammar to create deep structures which are then transformed into other structures, one required two grammars. The deep structures determined by the parser were used as input to a transformational component, since this was the only way to ensure that the surface structures could actually be derived from the base structure (Kay 2011: 10).

The REQUEST system by Plath (1973) also used a surface grammar and inverse transformations to arrive at the deep structure, which was used for semantic interpretation.

There are other implementations discussed in this chapter that differ from transformation-based analyses. For example, Kolb & Thiersch (1991: 265, Section 4) arrive at the conclusion that a declarative, constraint-based approach to GB is more appropriate than a derivational one. Johnson (1989) suggests a *Parsing-as-Deduction* approach which reformulates sub-theories of GB (X theory, Theta-Theory, Case Theory, …) as logical expressions.<sup>35</sup> These can be used independently of each other in a logical proof. In Johnson's analysis, GB theory is understood as a constraint-based system. More general restrictions are extracted from the restrictions on S- and D-structure which can then be used directly for parsing. This means that transformations are not directly carried out by the parser. As noted by Johnson, the language fragment he models is very small. It contains no description of *wh*-movement, for example (p. 114).

<sup>34</sup>http://www.cooperativegrocer.coop/articles/index.php?id=595. 2010-03-28.

<sup>35</sup>See Crocker & Lewin (1992: 511) and Fordham & Crocker (1994: 38) for another constraint-based Parsingas-Deduction approach.

Lin (1993) implemented the parser PrinciParse. It is written in C++ and based on GB and Barriers – the theoretical stage after GB (see Chomsky 1986a). The system contains constraints like the Case Filter, the Theta-Criterion, Subjacency, the Empty Category Principle and so on. The Theta-Criterion is implemented with binary features +/-theta, there is no implementation of Logical Form (p. 119). The system organizes the grammar in a network that makes use of the object-oriented organization of C++ programs, that is, default-inheritance is used to represent constraints in super and subclasses (Lin 1993: Section 5). This concept of inheritance is alien to GB theory: it does not play any role in the main publications. The grammar networks license structures corresponding to X theory, but they code the possible relations directly in the network. The network contains categories like IP, Ibar, I, CP, Cbar, C, VP, Vbar, V, PP, PSpec, Pbar, P and so on. This corresponds to simple phrase structure grammars that fully specify the categories in the rules (see Section 2.2) rather than working with abstract schemata like the ones assumed in X theory (see Section 2.5). Furthermore Lin does not assume transformations but uses a GPSG-like feature passing approach to nonlocal dependencies (p. 116, see Section 5.4 on the GPSG approach).

Probably the most detailed implementation in the tradition of GB and Barriers is Stabler's Prolog implementation (1992). Stabler's achievement is certainly impressive, but his book confirms what has been claimed thus far: Stabler has to simply stipulate many things which are not explicitly mentioned in *Barriers* (e.g., using feature-value pairs when formalizing X theory, a practice that was borrowed from GPSG) and some assumptions cannot be properly formalized and are simply ignored (see Briscoe 1997 for details).

GB analyses which fulfill certain requirements can be reformulated so that they no longer make use of transformations. These transformation-less approaches are also called *representational*, whereas the transformation-based approaches are referred to as *derivational*. In representational analyses, there are only surface structures augmented by traces, but none of these structures is connected to an underlying structure by means of transformations (see e.g., Koster 1978; 1987: 235; Kolb & Thiersch 1991; Haider 1993: Section 1.4; Frey 1993: 14; Lohnstein 1993: 87–88, 177–178; Fordham & Crocker 1994: 38; Veenstra 1998: 58). These analyses can be implemented as computer-processable fragments in the same way as corresponding HPSG analyses (see Chapter 9), and this has in fact been carried out, for example, for the analysis of verb position in German.<sup>36</sup> However, such implemented analyses differ from GB analyses with regard to their basic architecture and in small but important details, such as how one deals with the interaction of long-distance dependencies and coordination (Gazdar 1981b). For a critical discussion and classification of movement analyses in Transformational Grammar, see Borsley (2012).

Following this somewhat critical overview, I want to add a comment in order to avoid being misunderstood: I do not demand that all linguistic work be completely formalized.

<sup>36</sup>This shows that ten Hacken's contrasting of HPSG with GB and LFG (ten Hacken 2007: Section 4.3) and the classification of these frameworks as belonging to different research paradigms is completely mistaken. In his classification, ten Hacken refers mainly to the model-theoretic approach that HPSG assumes. However, LFG also has a model-theoretic formalization (Kaplan 1989). Furthermore, there is also a model-theoretic variant of GB (Rogers 1998). For further discussion, see Chapter 14.

There is simply no space for this in a, say, thirty-page essay. Furthermore, I do not believe that all linguists should carry out formal work and implement their analyses as computational models. However, there has to be *somebody* who works out the formal details, and these basic theoretical assumptions should be accepted and adopted for a sufficient amount of time by the research community in question.

# **Comprehension questions**


## **Exercises**
Give analyses of the following sentences in the GB framework:
(56) a. dass der Delphin dem Kind hilft
        that the.nom dolphin the.dat child helps
        'that the dolphin helps the child'
     b. dass der Delphin den Hai attackiert
        that the.nom dolphin the.acc shark attacks
        'that the dolphin attacks the shark'
     c. dass der Hai attackiert wird
        that the.nom shark attacked aux
        'that the shark is attacked'
     d. Der Hai wird attackiert.
        the.nom shark aux attacked
        'The shark is attacked.'
     e. Der Delphin hilft dem Kind.
        the dolphin.nom helps the.dat child
        'The dolphin is helping the child.'

For the passive sentences, use the analysis where the subject noun phrase is moved from the object position, that is, the analysis without an empty expletive as the subject.

### **Further reading**

For Sections 3.1–3.5, I used material from Peter Gallmann from 2003 (Gallmann 2003). This has been modified, however, at various points. I am solely responsible for any mistakes or inadequacies. For current materials by Peter Gallmann, see http://www.syntax-theorie.de.

In the book *Syntaxtheorien: Analysen im Vergleich*, Lohnstein (2014) presents a variant of GB which more or less corresponds to what is discussed in this chapter (CP/IP, movement-based analysis of the passive). The chapters in said book have been written by proponents of various theories and all analyze the same newspaper article. This book is extremely interesting for all those who wish to compare the various theories out there.

Haegeman (1994) is a comprehensive introduction to GB. Those who read German may consider the textbooks by Fanselow & Felix (1987), von Stechow & Sternefeld (1988) and Grewendorf (1988), since they also address the phenomena that are covered in this book.

In many of his publications, Chomsky discusses alternative, transformation-less approaches as "notational variants". This is not appropriate, as analyses without transformations can make different predictions from transformation-based approaches (e.g., with respect to coordination and extraction; see Section 5.5 for a discussion of GPSG in this respect). In Gazdar (1981a), one can find a comparison of GB and GPSG as well as a discussion of the classification of GPSG as a notational variant of Transformational Grammar, with contributions from Noam Chomsky, Gerald Gazdar and Henry Thompson.

Borsley (1999b) and Kim & Sells (2008) have parallel textbooks for GB and HPSG in English. For the comparison of Transformational Grammar and LFG, see Bresnan & Kaplan (1982). Kuhn (2007) offers a comparison of modern derivational analyses with constraint-based LFG and HPSG approaches. Borsley (2012) contrasts analyses of long-distance dependencies in HPSG with movement-based analyses as in GB/Minimalism. Borsley discusses four types of data which are problematic for movement-based approaches: extraction without fillers, extraction with multiple gaps (see also the discussion of (57) on p. 171 and of (55) on p. 201 of this book), extractions where fillers and gaps do not match and extraction without gaps.

# **4 Transformational Grammar – Minimalism**

Like the Government & Binding framework that was introduced in the previous chapter, the Minimalist framework was initiated by Noam Chomsky at MIT. Chomsky (1993, 1995b) argued that the problem of language evolution should be taken seriously and that the question of how linguistic knowledge could become part of our genetic endowment should be answered. To that end, he suggested refocusing theoretical development towards models that make minimal assumptions regarding the machinery needed for linguistic analyses, and hence towards models that assume less language-specific innate knowledge.

Like GB, Minimalism is widespread: theoreticians all over the world work in this framework, so the following list of researchers and institutions is necessarily incomplete. *Linguistic Inquiry* and *Syntax* are journals that almost exclusively publish Minimalist work, and the reader is referred to these journals to get an idea of who is active in this framework. The most prominent researchers in Germany are Artemis Alexiadou, Humboldt University Berlin; Günther Grewendorf (2002), Frankfurt am Main; Joseph Bayer, Konstanz; and Gereon Müller, Leipzig.

While innovations like X theory and the analysis of clause structure in GB are highly influential and can be found in most of the other theories that are discussed in this book, this is less so for the technical work done in the Minimalist framework. It is nevertheless useful to familiarize oneself with the technicalities, since Minimalism is a framework in which a lot of work is done, and understanding the basic machinery makes it possible to read empirically interesting work in that framework.

While the GB literature of the 1980s and 1990s shared a lot of assumptions, there was an explosion of various approaches in the Minimalist framework that is difficult to keep track of. The presentation that follows is based on David Adger's textbook (Adger 2003).

# **4.1 General remarks on the representational format**

The theories that are developed in the framework of the Minimalist Program build on the work done in the GB framework. So a lot of things that were explained in the previous chapter can be taken over to this chapter. However, there have been some changes in fundamental assumptions. The general parametrized principles were dropped from the theory and instead the relevant distinctions live in features. Languages differ in the values that certain features may have and in addition to this, features may be strong or weak and feature strength is also a property that may vary from language to language.

Strong features make syntactic objects move to higher positions. The reader is already familiar with this kind of feature-driven movement, since it was a component of the movement-based analysis of the passive in Section 3.4: in the GB analysis of the passive, the object had to move to the specifier position of IP in order to receive case. Such movements, which are due to missing feature values, are a key component of Minimalist proposals.

### **4.1.1 Basic architecture**

Chomsky assumes that there are just two operations (rules) for combining linguistic objects: External and Internal Merge. External Merge simply combines two elements like *the* and *book* and results in a complex phrase. Internal Merge is used to account for movement of constituents: it applies to one linguistic object, takes some part of this linguistic object, and adjoins it to the left of the respective object. External Merge and Internal Merge can apply in any order. For instance, two objects can be combined with External Merge and then one of the combined items is moved to the left by applying Internal Merge. The resulting object can be externally merged with another object and so on. As an example consider the Determiner Phrase (DP) in (1):<sup>1</sup>

(1) the man who we know

To derive this DP, the verb *know* is externally merged with its object *who*. After several intermediate merges that will be discussed below, *know who* is merged with *we*, and finally *who* is moved to the left by Internal Merge, resulting in *who we know*. This relative clause can be externally merged with *man* and so on.
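The two operations can be given a toy rendering in Prolog. This is my own sketch, not part of any Minimalist implementation: syntactic objects are binary lists, External Merge pairs two objects, and Internal Merge remerges a subpart at the left edge, leaving the lower copy in place as in the copy theory of movement:

```prolog
% External Merge: combine two syntactic objects into a binary structure.
external_merge(X, Y, [X, Y]).

% Internal Merge: take a proper subpart of SO and remerge it on the left.
internal_merge(SO, Moved, [Moved, SO]) :-
    subpart(Moved, SO),
    Moved \= SO.

subpart(X, X).
subpart(X, [L, _]) :- subpart(X, L).
subpart(X, [_, R]) :- subpart(X, R).

% ?- external_merge(know, who, VP),      % VP  = [know, who]
%    external_merge(we, VP, C),          % C   = [we, [know, who]]
%    internal_merge(C, who, Rel).        % Rel = [who, [we, [know, who]]]
```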

So, Minimalist theories differ from GB in not assuming a deep structure that is generated by some X grammar and a surface structure that is derived from the deep structure by Move-α. Instead, it is assumed that there is a phase in which External and Internal Merge (combination and movement) apply in any order to derive a certain structure, which is then said to be spelled out. It is said that the structure is sent to the interfaces: the articulatory-perceptual system (AP) on the one hand and the conceptual-intentional system (CI) on the other hand. AP corresponds to the level of Phonological Form (PF) and CI to the level of Logical Form (LF) in GB. The new architecture is depicted in Figure 4.1 on the facing page (left figure). Syntax is assumed to operate on so-called numerations, selections of lexical items that are relevant for a derivation.<sup>2</sup> *Overt syntax* stands for syntactic operations that usually have a visible effect. After overt syntax, the syntactic object is sent off to the interfaces, and some transformations may take place after this Spell-Out point. Since such transformations do not affect pronunciation, this part of

<sup>1</sup>Most researchers working in Minimalism follow Abney (1987) in assuming that the determiner rather than the noun is the head of nominal structures. Hence sequences like (1) are determiner phrases rather than noun phrases.

<sup>2</sup>It is unclear to me how numerations are determined. Since empty elements play a crucial role in the analysis of sentences in Minimalism, and since it is not known which empty elements are needed in an actual analysis until the analysis is carried out, there will be infinitely many numerations that could potentially be used in the analysis of a given string. Which numeration is chosen and how numerations could be integrated into psycholinguistically plausible models of human sentence comprehension is unclear to me. Numerations will be ignored in what follows, and it will be assumed that lexical items come directly from the lexicon. See Hornstein et al. (2005: Section 2.3.2.6) for more on numerations.

Figure 4.1: Architecture assumed in Minimalist theories before the Phase model (left) and in the Phase model (right) according to Richards (2015: 812, 830)

syntax is called *covert syntax*. As with GB's LF, covert syntax can be used to derive certain scope readings.

This architecture was later modified to allow Spell-Out at several points in the derivation (right figure). It is now assumed that there are *phases* in a derivation and that a completed phase is spelled out once it is used in a combination with a head (Chomsky 2008). For instance, a subordinated sentence like *that Peter comes* in (2) is one phase and is sent to the interfaces before the whole sentence is completed.<sup>3</sup>

(2) He believes that Peter comes.

There are different proposals as to what categories form complete phases. Since the concept of phases is not important for the following introduction, I will ignore this concept in the following. See Section 15.1 on the psycholinguistic plausibility of phases in particular and the Minimalist architecture in general.

<sup>3</sup>Andreas Pankau (p. c. 2015) pointed out to me that there is a fundamental problem with such a conception of phases, since if it is the case that only elements that are in a relation to a head are sent off to the interface then the topmost phrase in a derivation would never be sent to the interfaces, since it does not depend on any head.

### **4.1.2 Valence, feature checking, and agreement**

The basic mechanism in Minimalist theories is feature checking. For instance, the noun *letters* may have a P feature, which means that it has to combine with a PP in order to form a complete phrase.

(3) letters to him

It is assumed that there are interpretable and uninterpretable features. An example of an interpretable feature is the number feature of nouns. The singular/plural distinction is semantically relevant. The category features for part of speech information are purely syntactic and hence cannot be interpreted semantically. Minimalism assumes that all uninterpretable features have to be used up during the derivation of a complex linguistic object. This process of eating up the features is called *checking*. As an example, let us consider the noun *letters* again. The analysis of (3) is depicted in Figure 4.2. The fact

Figure 4.2: Valence representation via uninterpretable features

that the P feature of *letters* is uninterpretable is represented by the little *u* in front of the P. The uninterpretable P feature of *letters* can be checked against the P feature of *to him*. *him* is assumed to be of category D since it can appear in the same places as full nominal phrases, which are assumed to be DPs here. All checked features are said to delete automatically. The deletion is marked by striking the features out in the figures. Strings like (4) are ruled out as complete derivations since the D feature of P is not checked. This situation is shown in Figure 4.3.

(4) \* letters to

Figure 4.3: Illegitimate syntactic object due to an uninterpretable feature

If this structure were used in a larger structure that is spelled out, the derivation would *crash*, since the conceptual system could not make sense of the D feature that is still present at the P node.
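The interplay of checking and crashing can be made concrete with a small sketch. The following Python fragment is purely illustrative (the encoding and all names are mine, not part of the Minimalist literature): a head records its unchecked selectional features, Merge deletes them, and Spell-Out rejects any object on which uninterpretable features survive.

```python
# A minimal sketch of selectional feature checking (encoding mine, not
# Adger's or Chomsky's formalization). A head carries uninterpretable
# selectional features that are deleted ("checked") under Merge with a
# phrase of the matching category; unchecked features cause a crash.

class SynObj:
    def __init__(self, cat, ufeatures=()):
        self.cat = cat                    # category, e.g., 'N', 'P', 'D'
        self.ufeatures = list(ufeatures)  # unchecked uninterpretable features

def merge(head, dependent):
    """External Merge: check the head's next selectional feature
    against the category of the dependent."""
    if head.ufeatures and head.ufeatures[0] == dependent.cat:
        return SynObj(head.cat, head.ufeatures[1:])  # feature deleted
    raise ValueError("selectional feature cannot be checked")

def spell_out(obj):
    """The derivation crashes if uninterpretable features survive."""
    if obj.ufeatures:
        raise RuntimeError(f"crash: unchecked features {obj.ufeatures}")
    return obj

letters = SynObj('N', ufeatures=['P'])  # 'letters' selects a PP
to      = SynObj('P', ufeatures=['D'])  # 'to' selects a DP
him     = SynObj('D')

pp = merge(to, him)       # uD on 'to' is checked, as in Figure 4.2
np = merge(letters, pp)   # uP on 'letters' is checked
spell_out(np)             # fine: no features are left

# A P node with an unchecked uD, as in 'letters to' (Figure 4.3):
# spell_out(SynObj('P', ['D']))   # would raise: crash at the interfaces
```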

Selectional features are atomic, that is, the preposition cannot select a DP[*acc*] as in GB and the other theories in this book, unless DP[*acc*] is assumed to be atomic. Therefore, an additional mechanism is assumed that can check other features in addition to selectional features. This mechanism is called *Agree*.

(5)	a. \* letters to he
	b. letters to him

The analysis of (5b) is shown in Figure 4.4. There is an interesting difference between the

Figure 4.4: Feature checking via Agree

checking of selectional features and the checking of features via Agree. The features that are checked via Agree do not have to be at the top node of the object that is combined with a head. This will play a role later in the analysis of the passive and local reordering.

## **4.1.3 Phrase structure and X theory**

The projections of X structures were given in Figure 2.9 on page 76. According to early versions of the X theory, there could be arbitrarily many complements that were combined with X<sup>0</sup> to form an X′. Arbitrarily many adjuncts could attach to X′ and then at most one specifier could be combined with the X′, yielding an XP. Minimalist theories assume binary branching and hence there is at most one complement, which is the first-merged item. Furthermore, it is not assumed that there is a unique specifier position. Chomsky rather assumes that all items that are not complements are specifiers. That is, he distinguishes between first-merged (complements) and later-merged items (specifiers). Figure 4.5 on the following page shows an example with two specifiers. It is also possible to have just a complement and no specifier or to have one or three specifiers. Which structures are ultimately licensed depends on the features of the items that are involved in the Merge operations. Whether a phrasal projection counts as an X′ or an XP depends on whether the phrase is used as a complement or specifier of another head or whether it is used as head in further Merge operations. If a phrase is used as specifier or complement, its status is fixed to be a phrase (XP); otherwise, the projectional status of resulting phrases is left underspecified. Lexical head daughters in Merge operations have the category X and complex head daughters in Merge operations have the category X′. This solves the problem that standard X theoretic approaches had with pronouns and

Figure 4.5: Complements and specifiers in Minimalist theories

proper names: a lot of unary branching structure had to be assumed (see the left picture in Figure 2.9 on page 76). This is no longer necessary in current Minimalist theories.<sup>4</sup>

## **4.1.4 Little** *v*

In Section 3.4, I used X structures in which a ditransitive verb was combined with its accusative object to form a V′, which was then combined with the dative object to form a further V′. Such binary branching structures and also flat structures in which both objects are combined with the verb to form a V′ are rejected by many practitioners of GB and Minimalism since the branching does not correspond to branchings that would be desired for phenomena like the binding of reflexives and negative polarity items (NPIs). For example, a binding in which *Benjamin* binds *himself* in (6a) is impossible:

(6)	a. \* Peter showed himself Benjamin in the mirror. (intended: *himself* = *Benjamin*)
	b. Peter showed himself Benjamin in the mirror.

Since there is no possible binding for *himself* in (6a), the sentence is ungrammatical. (6b) is fine, but *himself* has to refer to *Peter*; it cannot refer to *Benjamin*.

What is required for the analysis of Binding and NPI phenomena in theories that analyze these phenomena in terms of tree configurations is that the reflexive pronoun in (6) is "higher" in the tree than the proper name *Benjamin*. More precisely, the reflexive pronoun *himself* has to c-command *Benjamin*. c-command is defined as follows (Adger 2003: 117):<sup>5</sup>

(7)	A node A c-commands B if, and only if, A's sister either:
	a. is B, or
	b. contains B
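Definition (7) is simple enough to transcribe directly into code. The sketch below is mine (binary trees as nested tuples) and only serves to make the definition testable; it is not part of Adger's proposal.

```python
# Definition (7) transcribed into code (a sketch; the binary-tree
# encoding as nested tuples is mine). A c-commands B iff A's sister
# either is B or contains (dominates) B.

def contains(node, target):
    """Dominance: target occurs somewhere inside node."""
    if node is target:
        return True
    return isinstance(node, tuple) and any(contains(d, target) for d in node)

def find_sister(node, root):
    """Return the sister of node in the binary tree root, if any."""
    if not isinstance(root, tuple):
        return None
    left, right = root
    if left is node:
        return right
    if right is node:
        return left
    return find_sister(node, left) or find_sister(node, right)

def c_commands(a, b, root):
    sister = find_sister(a, root)
    return sister is not None and (sister is b or contains(sister, b))

# The VP part of the structure with little v (cf. Figure 4.6, right):
# [VP himself [V' showed Benjamin]]
himself, showed, benjamin = 'himself', 'showed', 'Benjamin'
vp = (himself, (showed, benjamin))
print(c_commands(himself, benjamin, vp))  # True: the sister V' contains Benjamin
print(c_commands(benjamin, himself, vp))  # False: the sister is just the verb
```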

<sup>4</sup> For problems with this approach see Brosziewski (2003: Section 2.1).

<sup>5</sup> c-command also plays a prominent role in GB. In fact, one part of Government & Binding is the Binding Theory, which was not discussed in the previous chapter since binding phenomena do not play a role in this book.

In the trees to the left and in the middle of Figure 4.6 the c-command relations are not as desired: in the left-most tree both DPs c-command each other and in the middle one *Benjamin* c-commands *himself* rather than the other way round. Hence it is assumed that the

Figure 4.6: Three possible analyses of ditransitives

structures at the left and in the middle are inappropriate and that there is some additional structure involving the category *v*, which is called *little v* (Adger 2003: Section 4.4). The sister of *himself* is V′ and V′ contains *Benjamin*, hence *himself* c-commands *Benjamin*. Since the sister of *Benjamin* is V and V neither is nor contains *himself*, *Benjamin* does not c-command *himself*. *Peter* in (6b) is the specifier of *v* and hence c-commands and binds *himself*, as expected.

The analysis of ditransitives involving an additional verbal head goes back to Larson (1988). Hale & Keyser (1993: 70) assume that this verbal head contributes a causative semantics. The structure in Figure 4.7 is derived by assuming that the verb *show* starts out in the V position and then moves to the *v* position. *show* is assumed to mean *see* and in the position of little *v* it picks up the causative meaning, which results in a *cause-see*′ meaning (Adger 2003: 133).

Figure 4.7: Analysis of ditransitives involving movement to little *v*

While the verb shell analysis with an empty verbal head was originally invented by Larson (1988) for the analysis of ditransitive verbs, it is now also used for the analysis of strictly transitive and even intransitive verbs.

Adger (2003: Section 4.5) argues that semantic roles are assigned uniformly in certain tree configurations:

(8)	a. DP daughter of *v*P → interpreted as agent
	b. DP daughter of VP → interpreted as theme
	c. PP daughter of *v* → interpreted as goal

Adger assumes that such uniformly assigned semantic roles help in the process of language acquisition and from this it follows that little *v* should also play a role in the analysis of examples with strictly transitive and intransitive verbs. Figures 4.8 and 4.9 show the analyses of sentences containing the verbs *burn* and *laugh*, respectively.<sup>6</sup>

Figure 4.8: Analysis of strictly transitives involving little *v*

Figure 4.9: Analysis of intransitives involving little *v*
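Stated as code, the uniform role assignment in (8) is just a finite map from tree configurations to roles. The sketch below (encoding mine, purely illustrative) makes the uniformity explicit: the role of an argument is read off the category of the argument and the category of the projection it is a daughter of.

```python
# Sketch of the uniform role assignment in (8) (the encoding is mine).
# A configuration is the pair (category of the argument, category of
# the projection it is a daughter of).
ROLE_BY_CONFIGURATION = {
    ('DP', 'vP'): 'agent',   # (8a)
    ('DP', 'VP'): 'theme',   # (8b)
    ('PP', 'v'):  'goal',    # (8c)
}

def semantic_role(argument_category, mother_category):
    return ROLE_BY_CONFIGURATION.get((argument_category, mother_category))

print(semantic_role('DP', 'vP'))  # agent: e.g., the subject in Figure 4.8
print(semantic_role('DP', 'VP'))  # theme: e.g., the object in Figure 4.8
```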

Adger (2003: 164) assumes that intransitive and transitive verbs move from V to little *v* as well. This will be reflected in the following figures.

<sup>6</sup> If all intransitive verbs of this type are supposed to have agents as subjects, a very broad conception of agent has to be assumed that also subsumes the subject of verbs like *sleep*. Usually sleeping is not an activity that is performed intentionally.

## **4.1.5 CP, TP,** *v***P, VP**

Section 3.1.5 dealt with the CP/IP system in GB. In the course of the development of Minimalism, the Inflectional Phrase was split into several functional projections (Chomsky 1989) of which only the Tense Phrase is assumed in current Minimalist analyses. So, the TP of Minimalism corresponds to IP in the GB analysis. Apart from this change, the core ideas of the CP/IP analysis have been transferred to the Minimalist analysis of English. This subsection will first discuss special features that are assumed to trigger movement (Subsection 4.1.5.1) and then case assignment (Subsection 4.1.5.2).

### **4.1.5.1 Features as triggers for movement: The EPP feature on T**

In GB approaches, the modals and auxiliaries were analyzed as members of the category I and the subjects as specifiers of IP. In the previous section, I showed how subjects are analyzed as specifiers of *v*P. Now, if one assumes that a modal verb combines with such a *v*P, the subject follows the modal, which does not correspond to the order that is observable in English. This problem is solved by assuming a strong uninterpretable D feature at T. Since the feature is strong, a suitable D has to move to the specifier of T and check the D feature locally. Figure 4.10 shows the TP that plays a role in the analysis of (9):

(9) Anna will read the book.

Figure 4.10: Analysis of *Anna will read the book.* involving a modal and movement of the subject from *v* to T

The Determiner Phrase (DP) *the book* is the object of *read* and checks the D feature of *read*. Little *v* selects for the subject *Anna*. Since T has a strong D feature (marked by an


asterisk '\*'), *Anna* cannot remain inside the *v*P but moves on to the specifier position of TP. The strong feature is also called EPP feature for historical reasons: Chomsky (1982: 10) stipulated a principle called the Extended Projection Principle (EPP), which says that every clause has to have a subject. The effect of this principle can be reached in Minimalism by assuming EPP features on T nodes, since such features require a DP to be moved to SpecTP and hence make sure that a subject is realized there.

Full sentences are CPs. For the analysis of (9), an empty C head is assumed that is combined with the TP. The empty C contributes a clause type feature Decl. The full analysis of (9) is shown in Figure 4.11.

Figure 4.11: Analysis of *Anna will read the book.* as CP with an empty C with the clausetype feature Decl

The analysis of the question in (10) involves an unvalued clause-type feature on T for the sentence type *question*.

(10) What will Anna read?

The empty complementizer C has a Q feature that can value the clause-type feature on T. Since clause-type features on T that have the value Q are stipulated to be strong, the T element has to move to C to check the feature locally. In addition, the *wh* element is moved. This movement is enforced by a strong *wh* feature on C. The analysis of (10) is given in Figure 4.12 on the next page.

Figure 4.12: Analysis of *What will Anna read?* with an empty C with a strong *wh* feature
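The role of feature strength in these derivations can be summarized in a few lines of code. The sketch below (names and data structure mine, purely illustrative) encodes the generalization of this subsection: every strong uninterpretable feature forces local checking and hence triggers Internal Merge, whereas weak features can be checked at a distance.

```python
# Sketch of the feature-strength logic (encoding mine): strong
# uninterpretable features (marked '*') must be checked locally and
# trigger Internal Merge; weak features may be checked in situ.

def required_operation(feature, strong):
    if strong:
        return f"{feature}: Internal Merge (move a matching element)"
    return f"{feature}: Agree (checking at a distance)"

# The features driving the derivation of (10) 'What will Anna read?':
features = [
    ('uD* on T (EPP)',       True),   # the subject moves to SpecTP
    ('clause-type:Q* on T',  True),   # T (with 'will') moves to C
    ('wh* on C',             True),   # 'what' moves to SpecCP
    ('ucase on "Anna"',      False),  # valued in situ by T via Agree
]
for feature, strong in features:
    print(required_operation(feature, strong))
```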

### **4.1.5.2 Case assignment**

In the GB analysis that was presented in Chapter 3, nominative was assigned by (finite) I and the other cases by the verb (see Section 3.4.2). The assignment of nominative is carried over to Minimalist analyses, so it is assumed that nominative is assigned by (finite) T. However, in the Minimalist theory under consideration, there is not a single verb projection; there are two verbal projections: *v*P and VP. Now, one could assume that V assigns accusative to its complement or that *v* assigns accusative to the complement of the verb it dominates. Adger (2003: Section 6.3.2, Section 6.4) assumes the latter approach, since it is compatible with the analysis of so-called unaccusative verbs and the passive. Figure 4.13 on the following page shows the TP for (11):

(11) Anna reads the book.

The two DPs *Anna* and *the book* start out with unvalued uninterpretable case features: [*u*case:]. The features get valued by T and *v*. It is assumed that only one feature is checked by Merge, so this would be the D feature on T, leaving the case feature for the other available checking mechanism: Agree. Agree can be used to check features in sister nodes, but also features further away in the tree. The places that are possible candidates for Agree relations have to stand in a certain relation to each other. The first node has to c-command the node it Agrees with. c-command roughly means: one node up and then arbitrarily many nodes down. So *v* c-commands VP, V, the DP *the book*, and all the nodes

Figure 4.13: Case assignment by T and *v* in the TP for *Anna reads the book.*

within this DP. Since Agree can value features of c-commanded nodes, the accusative on *v* can value the case feature of the DP *the book*.

The non-locality that is built into Agree raises a problem: why is it that (12) is ungrammatical?

(12) \* Him likes she.

The accusative feature of *v* could be checked against the subject and the nominative feature of T against the object of *likes*. Both DPs stand in the necessary c-command relations to T and *v*. This problem is solved by requiring that all Agree relations have to involve the closest possible element. Adger (2003: 218) formulates this constraint as follows:

(13) Locality of matching: Agree holds between a feature F on X and a matching feature F on Y if and only if there is no intervening Z[F].

Intervention is defined as in (14):

(14) Intervention: In a structure [X … Z … Y], Z intervenes between X and Y iff X c-commands Z and Z c-commands Y.

So, since T may Agree with *Anna* in Figure 4.13, it must not Agree with *the book*. Hence nominative assignment to *she* in (12) is impossible and (12) is correctly ruled out.
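The locality condition in (13) and (14) can be made explicit with a small search procedure. In the sketch below (a simplification of mine, not Adger's formalization), the clausal spine is flattened into a list in which each element c-commands everything to its right; Agree then picks the closest matching goal, so that a more distant match is blocked by an intervener.

```python
# Sketch of Agree with the locality condition in (13)/(14)
# (simplification mine): the spine is a list in which every element
# c-commands all later ones. A probe Agrees with the *closest*
# c-commanded matching goal.

def agree_goal(probe_index, feature, spine):
    for name, features in spine[probe_index + 1:]:
        if feature in features:
            return name  # closest match; anything further away is blocked
    return None

# 'Anna reads the book.' (cf. Figure 4.13):
spine = [
    ('T',        {'nom'}),     # probes for a case-less DP
    ('Anna',     {'ucase'}),   # subject in the specifier of vP
    ('v',        {'acc'}),
    ('the book', {'ucase'}),   # object inside the VP
]
print(agree_goal(0, 'ucase', spine))  # 'Anna' -> gets nominative
print(agree_goal(2, 'ucase', spine))  # 'the book' -> gets accusative
# T cannot skip 'Anna' to reach 'the book': 'Anna' intervenes in the
# sense of (14), which rules out (12) '* Him likes she.'
```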

### **4.1.6 Adjuncts**

Adger (2003: Section 4.2.3) assumes that adjuncts attach to XP and form a new XP. He calls this operation *Adjoin*. Since this operation does not consume any features it is different from External Merge and hence a new operation would be introduced into the

theory, contradicting Chomsky's claim that human languages use only Merge as a structure-building operation. There are proposals to treat adjuncts as elements in special adverbial phrases with empty heads (see Section 4.6.1) that are also assumed to be part of a hierarchy of functional projections. Personally, I prefer Adger's solution, which corresponds to what is done in many other frameworks: there is a special rule or operation for the combination of adjuncts and heads (see for instance Section 9.1.7 on the HPSG schema for head-adjunct combinations).

# **4.2 Verb position**

The analysis of verb-first sentences in German is straightforward, given the machinery that was introduced in the previous section. The basic idea is the same as in GB: the finite verb moves from V to *v* to T and then to C. The movement to T is forced by a strong tense feature on T, and the movement of the T complex to C is enforced by a clause-type feature on T that is valued as a strong Interrogative feature (Int) or a strong Declarative feature (Decl) by C. The analysis of the interrogative clause in (15) is shown in Figure 4.14 on the next page.

(15)	Kennt jeder diesen Roman?
	knows everybody this novel
	'Does everybody know this novel?'

# **4.3 Long-distance dependencies**

Having explained the placement of the verb in initial position, the analysis of V2 sentences does not come as a surprise: Adger (2003: 331) assumes a feature that triggers the movement of a constituent to a specifier position of C. Adger calls this feature top, but this is a misnomer since the initial position in German declarative sentences is not restricted to topics (maybe he would assume an alternative C head with a foc feature). Figure 4.15 on page 141 shows the analysis of (16):

(16)	Diesen Roman kennt jeder.
	this novel knows everybody
	'Everybody knows this novel.'

As in the verb-initial clause in Figure 4.14 a feature on C triggers verb movement. This time it is a Decl feature since we are dealing with a declarative clause. The top feature triggers movement of *diesen Roman* 'this novel' to the specifier position of C.

# **4.4 Passive**

Adger (2003) suggests an analysis for the passive in English, which I adapt here to German. As in the GB analysis that was discussed in Section 3.4, it is assumed that

Figure 4.14: Analysis of *Kennt jeder diesen Roman?* 'Does everybody know this novel?' following the analysis of Adger (2003)

the verb does not assign accusative to the object of *schlagen* 'to beat'. In Minimalist terms, this means that little *v* does not have an acc feature that has to be checked. This special version of little *v* is assumed to play a role in the analysis of sentences with so-called unaccusative verbs (Perlmutter 1978). Unaccusative verbs are a subclass of intransitive verbs that have many interesting properties. For instance, they can be used as adjectival participles, although this is usually not possible with intransitive verbs:

(17)	a. \* der getanzte Mann
		the danced man
	b. der gestorbene Mann
		the died man
		'the dead man'

The explanation of this difference is that adjectival participles predicate over what is the object in active sentences:

(18)	b. das gelesene Buch
		the read book

Figure 4.15: Analysis of *Diesen Roman kennt jeder.* 'Everybody knows this novel.' following the analysis of Adger (2003: 331)

Now the assumption is that the argument of *gestorben* 'died' behaves like an object, while the argument of *getanzt* 'danced' behaves like a subject. If adjectival passives predicate over the object it is explained why (17b) is possible, while (17a) is not.

Adger (2003: 140) assumes the structure in Figure 4.16 for *v*Ps with unaccusative verbs. It is assumed that this unaccusative variant of little *v* plays a role in the analysis of the

Figure 4.16: Structure of *v*P with unaccusative verbs like *fall*, *collapse*, *wilt* according to Adger (2003: 140)

passive. Unaccusative verbs are similar to passivized verbs in that they do have a subject that somehow also has object properties. The special version of little *v* is selected by the Passive head *werden* 'aux', which forms a Passive Phrase (abbreviated as PassP). See Figure 4.17 for the analysis of the example in (19):

(19)	dass er geschlagen wurde
	that he beaten aux
	'that he was beaten'

The Pass head requires the Infl feature of little *v* to have the value Pass, which results in participle morphology at spellout. Hence the form that is used is *geschlagen* 'beaten'. The auxiliary moves to T to check the strong Infl feature at T and since the Infl feature is past, the past form of *werden* 'aux', namely *wurde* 'aux', is used at spellout. T has a nom feature that has to be checked. Interestingly, the Minimalist approach does not require the object of *schlagen* to move to the specifier position of T in order to receive case, since case assignment is done via Agree. Hence in principle, the pronominal argument of *schlagen* could stay in its object position and nevertheless get nominative from T. This would solve the problem of the GB analysis that was pointed out by Lenerz (1977: Section 4.4.3). See page 112 for Lenerz' examples and discussion of the problem. However, Adger (2003: 332) assumes that German has a strong EPP feature on T. If this assumption is upheld, all problems of the GB account will carry over to the Minimalist analysis: all objects have to move to T even when there is no reordering taking place. Furthermore, impersonal passives of the kind in (20) would be problematic, since there is no noun phrase that could be moved to T in order to check the EPP feature:

(20)	weil getanzt wurde
	because danced aux
	'because there was dancing there'

# **4.5 Local reordering**

Adger (2003) does not treat local reordering. But there are several other suggestions in the literature. Since all reorderings in Minimalist theories are feature-driven, there must be an item that has a feature that triggers reorderings like those in (21b):

(21)	a. [weil] jeder diesen Roman kennt
		because everyone this novel knows
	b. [weil] diesen Roman jeder kennt
		because this novel everyone knows

There have been various suggestions involving functional projections like Topic Phrase (Laenzlinger 2004: 222) or AgrS and AgrO (Meinunger 2000: Chapter 4) that offer places to move to. G. Müller (2014a: Section 3.5) offers a leaner solution, though. In his approach, the object simply moves to a second specifier position of little *v*. The analysis is depicted in Figure 4.18.<sup>7</sup>

Figure 4.18: Analysis of *dass diesen Roman jeder kennt* 'that everybody knows this novel' as movement of the object to a specifier position of *v*

An option that was suggested by Laenzlinger (2004: 229–230) is to assume several Object Phrases for objects that may appear in any order. The objects move to the specifier positions of these projections and since the order of the Object Phrases is not restricted, both orders in (22) can be analyzed:

(22)	a. dass Hans diesen Brief meinem Onkel gibt
		that Hans this letter my uncle gives
		'that Hans gives this letter to my uncle'

<sup>7</sup>G. Müller assumes optional features on *v* and V that trigger local reorderings (p. 48). These are not given in the figure.

	b. dass Hans meinem Onkel diesen Brief gibt
		that Hans my uncle this letter gives
		'that Hans gives my uncle this letter'

# **4.6 New developments and theoretical variants**

This section and the following one are for advanced readers. This section introduces some variants of Minimalism and can easily be skipped without running into problems with the remaining theory chapters. The next section compares Minimalism with theories introduced later in the book. So I suggest coming back here after reading Chapters 5–12.

At the start of the 1990s, Chomsky suggested a major rethink of the basic theoretical assumptions of GB, keeping only those parts of the theory that are absolutely necessary. In the *Minimalist Program*, Chomsky gives the central motivations for the far-reaching revisions of GB theory (Chomsky 1993, 1995b). Until the beginning of the 1990s, it was assumed that Case Theory, the Theta-Criterion, X theory, Subjacency, Binding Theory, Control Theory etc. all belonged to the innate faculty for language (Richards 2015: 804). This, of course, raises the question of how this very specific linguistic knowledge made its way into our genome. The Minimalist Program follows up on this point and attempts to explain properties of language through more general cognitive principles and to reduce the amount of innate language-specific knowledge postulated. The distinction between deep structure and surface structure, for example, was abandoned. Move still exists as an operation, but can be used directly to build sub-structures rather than only after a complete D-structure has been created. Languages differ with regard to whether this movement is visible or not.

Although Chomsky's Minimalist Program should be viewed as a successor to GB, advocates of Minimalism often emphasize the fact that Minimalism is not a theory as such, but rather a research program (Chomsky 2007: 4; 2013: 6). The actual analyses suggested by Chomsky (1995b) when introducing the research program have been reviewed by theoreticians and have sometimes come in for serious criticism (Kolb 1997; Johnson & Lappin 1997, 1999; Lappin, Levine & Johnson 2000a,b, 2001; Seuren 2004; Pinker & Jackendoff 2005). However, one should say that some criticisms overshoot the mark.

There are various strands of Minimalism. In the following sections, I will discuss some of the central ideas and explain which aspects are regarded as problematic.

# **4.6.1 Move, Merge, feature-driven movement and functional projections**

Johnson, Lappin and Kolb have criticized the computational aspects of Chomsky's system. Chomsky suggested incorporating principles of economy into the theory. In certain cases, the grammatical system can create an arbitrary number of structures, but only the most economical, that is, the one which requires the least effort to produce, will be accepted as grammatical (transderivational economy). This assumption does not necessarily have to be taken too seriously and, in reality, does not play a role in many works

in the Minimalist framework (although see Richards (2015) for recent approaches with derivations which are compared in terms of economy). Nevertheless, there are other aspects of Chomsky's theory which can be found in many recent works. For example, Chomsky has proposed reducing the number of basic structure-building operations that license structures to two: Move and Merge (that is, Internal and External Merge). Move corresponds to the operation Move-α, which was already discussed in Chapter 3, and Merge is the combination of (two) linguistic objects.

It is generally assumed that exactly two objects can be combined (Chomsky 1995b: 226). For Move, it is assumed that there must be a reason for a given movement operation. The reason for movement is assumed to be that an element can check some feature in the position it is moved to. This idea was already presented in the analysis of the passive in Section 3.4: the accusative object does not bear case in passive sentences and therefore has to be moved to a position where it can receive case. This kind of approach is also used in newer analyses for a range of other phenomena. For example, it is assumed that there are phrases whose heads have the categories focus and topic. The corresponding functional heads are always empty in languages like German and English. Nevertheless, the assumption of these heads is motivated by the fact that other languages possess markers which signal the topic or focus of a sentence morphologically. This argumentation is only possible if one also assumes that the inventory of categories is the same for all languages. Then, the existence of a category in one language would suggest the existence of the same category in all other languages. This assumption of a shared universal component (Universal Grammar, UG) with detailed language-specific knowledge is, however, controversial and is shared by few linguists outside of the Chomskyan tradition. Even for those working in Chomskyan linguistics, there have been questions raised about whether it is permissible to argue in this way since if it is only the ability to create recursive structures that is responsible for the human-specific ability to use language (faculty of language in the narrow sense) – as Hauser, Chomsky & Fitch (2002) assume –, then the individual syntactic categories are not part of UG and data from other languages cannot be used to motivate the assumption of invisible categories in another language.

#### **4.6.1.1 Functional projections and modularization of linguistic knowledge**

The assumption that movement must be licensed by feature checking has led to an inflation of the number of (silent) functional heads.<sup>8</sup> Rizzi (1997: 297) suggests the structure in Figure 4.19 on the next page (see also Grewendorf 2002: 85, 240; 2009).

The functional categories Force, Top, Foc and Fin correspond to clause type, topic, focus and finiteness. It is assumed that movement always targets a specifier position.

<sup>8</sup> The assumption of such heads is not necessary since features can be "bundled" and then they can be checked together. For an approach in this vein, which is in essence similar to what theories such as HPSG assume, see Sternefeld (2006: Section II.3.3.4, Section II.4.2).

In so-called cartographic approaches, it is assumed that every morphosyntactic feature corresponds to an independent syntactic head (Cinque & Rizzi 2010: 54, 61). For an explicitly formalized proposal in which exactly one feature is consumed during a combination operation see Stabler (2001: 335). Stabler's *Minimalist Grammars* are discussed in more detail in Section 4.6.4.

Figure 4.19: Syntactic structure of sentences following Rizzi (1997: 297)

Topics and focused elements are always moved to the specifier position of the corresponding phrase. Topics can precede or follow focused elements, which is why there are two topic projections: one above and one below FocP. Topic phrases are recursive, that is, an arbitrary number of TopPs can appear at the positions of TopP in the figure. Following Grewendorf (2002: 70), topic and focus phrases are only realized if they are required for particular information structural reasons, such as movement.<sup>9</sup> Chomsky (1995b: 147) follows Pollock (1989) in assuming that all languages have functional projections for

<sup>9</sup> There are differing opinions as to whether functional projections are optional or not. Some authors assume that the complete hierarchy of functional projections is always present but functional heads can remain empty (e.g., Cinque 1999: 106 and Cinque & Rizzi 2010: 55).


Table 4.1: Functional heads following Cinque (1999: 106)

subject and object agreement as well as negation (AgrS, AgrO, Neg).<sup>10</sup> Sternefeld (1995: 78), von Stechow (1996: 103) and Meinunger (2000: 100–101, 124) differentiate between two agreement positions for direct and indirect objects (AgrO, AgrIO). As well as AgrS, AgrO and Neg, Beghelli & Stowell (1997) assume the functional heads Share and Dist in order to explain scope phenomena in English as feature-driven movements at LF. For a treatment of scope phenomena without empty elements or movement, see Section 19.3. Błaszczak & Gärtner (2005: 13) assume the categories −PolP, +PolP and %PolP for their discussion of polarity.

Webelhuth (1995: 76) gives an overview of the functional projections that had been proposed up to 1995 and offers references for AgrA, AgrN, AgrV, Aux, Clitic Voices, Gender, Honorific, Number, Person, Predicate, Tense, Z.

In addition to AdvP, NegP, AgrP, FinP, TopP and ForceP, Wiklund, Hrafnbjargarson, Bentzen & Hróarsdóttir (2007) postulate an OuterTopP. Poletto (2000: 31) suggests both a HearerP and a SpeakerP for the position of clitics in Italian. Bosse & Bruening (2011: 75) assume a BenefactiveP and Speyer (2008: 470) a SceneP.

Cinque (1999: 106) adopts the 32 functional heads in Table 4.1 in his work. He assumes that all sentences contain a structure with all these functional heads. The specifier positions of these heads can be occupied by adverbs or remain empty. Cinque claims that these functional heads and the corresponding structures form part of Universal Grammar, that is, knowledge of these structures is innate (page 107).<sup>11</sup> Laenzlinger (2004) follows Cinque in proposing this sequence of functional heads for German. He also follows Kayne (1994), who assumes that all syntactic structures have the order specifier–head–complement cross-linguistically, even if the surface order of the constituents seems to contradict this.

<sup>10</sup>See Chomsky (1995b: Section 4.10.1), however.

<sup>11</sup>Table 4.1 shows only the functional heads in the clausal domain. Cinque (1994: 96, 99) also accounts for the order of adjectives with a cascade of projections: Quality, Size, Shape, Color, Nationality. These categories and their ordering are also assumed to belong to UG (p. 100).

Cinque (1994: 96) claims that a maximum of seven attributive adjectives are possible and explains this by the fact that there is a limited number of functional projections in the nominal domain. As was shown on page 65, with a fitting context it is possible to use several adjectives of the same kind, which is why some of Cinque's functional projections would have to be subject to iteration.


The constituent orders that are visible in the end are derived by leftward-movement.<sup>12</sup> Figure 4.20 on the next page shows the analysis of a verb-final clause where the functional adverbial heads have been omitted.<sup>13</sup> Subjects and objects are generated as arguments inside *v*P and VP, respectively. The subject is moved to the specifier of the subject phrase and the object is moved to the specifier of the object phrase. The verbal projection (VP) is moved in front of the auxiliary into the specifier position of the phrase containing the auxiliary. The only function of SubjP and ObjP is to provide a landing site for the respective movements. For a sentence in which the object precedes the subject, Laenzlinger assumes that the object moves to the specifier of a topic phrase. Figure 4.20 contains only a ModP and an AspP, although Laenzlinger assumes that all the heads proposed by Cinque are present in the structure of all German clauses. For ditransitive verbs, Laenzlinger assumes multiple object phrases (page 230). A similar analysis with movement of object and subject from verb-initial VPs to Agr positions was suggested by Zwart (1994) for Dutch.

For general criticism of Kayne's model, see Haider (2000). Haider shows that a Kaynelike theory makes incorrect predictions for German (for instance regarding the position of selected adverbials and secondary predicates and regarding verbal complex formation) and therefore fails to live up to its billing as a theory which can explain all languages. Haider (1997a: Section 4) has shown that the assumption of an empty Neg head, as assumed by Pollock (1989), Haegeman (1995) and others, leads to problems. See Bobaljik (1999) for problems with the argumentation for Cinque's cascade of adverb-projections.

Furthermore, it has to be pointed out that SubjP and ObjP, TraP (Transitive Phrase) and IntraP (Intransitive Phrase) (Karimi-Doostan 2005: 1745) and TopP (topic phrase), DistP (quantifier phrase), AspP (aspect phrase) (É. Kiss 2003: 22; Karimi 2005: 35), PathP and PlaceP (Svenonius 2004: 246) encode information about grammatical function, valence, information structure and semantics in the category symbols.<sup>14</sup> In a sense, this is a

<sup>12</sup>This also counts for extraposition, that is, the movement of constituents into the postfield in German. Whereas this would normally be analyzed as rightward-movement, Kayne (1994: Chapter 9) analyzes it as movement of everything else to the left. Kayne assumes that (i.b) is derived from (i.a) by moving part of the DP:

(i)	a. just walked into the room [DP someone who we don't know]
	b. Someone just walked into the room [DP \_ who we don't know].

(i.a) must be some kind of derived intermediate representation, otherwise English would not be SV(O) underlyingly but rather V(O)S. (i.a) is therefore derived from (ii) by fronting the VP *just walked into the room*:

(ii) Someone who we don't know just walked into the room.

Such analyses have the downside that they cannot be easily combined with performance models (see Chapter 15).

<sup>13</sup>These structures do not correspond to X theory as it was presented in Section 2.5. In some cases, heads have been combined with complements to form an XP rather than an X′. For more on X theory in the Minimalist Program, see Section 4.6.3.

<sup>14</sup>For further examples and references, see Newmeyer (2004a: 194; 2005: 82). Newmeyer also references works which stipulate a projection for each semantic role, e.g., Agent, Reciprocal, Benefactive, Instrumental, Causative, Comitative, and Reversive Phrase.

Figure 4.20: Analysis of sentence structure with leftward remnant movement and functional heads following Laenzlinger (2004: 224)


misuse of category symbols, but such a misuse of information structural and semantic categories is necessary since syntax, semantics, and information structure are tightly connected and since it is assumed that the semantics interprets the syntax, that is, it is assumed that semantics comes after syntax (see Figure 3.2 and Figure 4.1). By using semantically and pragmatically relevant categories in syntax, there is no longer a clean distinction between the levels of morphology, syntax, semantics and pragmatics: everything has been 'syntactified'. Rizzi (2014) himself talks about syntactification. He points out that there are fundamental problems with the T-model and its current variants in Minimalism and concludes that a syntactification in terms of Rizzi-style functional heads or a proliferation of T heads with respective features (see also Borsley 2006, Borsley & Müller 2021) is the only way to save this architecture. Felix Bildhauer (p. c. 2012) has pointed out to me that approaches which assume a cascade of functional projections where the individual aspects of meaning are represented by nodes are actually very close to phrasal approaches in Construction Grammar (see Adger 2013: 470 also for a similar view). One simply lists configurations and these are assigned a meaning (or features which are interpreted post-syntactically, see Cinque & Rizzi (2010: 62) for the interpretation of TopP, for example).

### **4.6.1.2 Feature checking in specifier positions**

If one takes the theory of feature checking in Specifier-Head relations to its logical conclusion, then one arrives at an analysis such as the one suggested by Radford (1997: 452). Radford assumes that prepositions are embedded in an Agreement Phrase in addition to the structure in (23), which is usually assumed, and that the preposition adjoins to the head of the Agreement Phrase and the argument of the preposition is moved to the specifier position of the Agreement Phrase.

### (23) [PP P DP ]

The problem here is that the object now precedes the preposition. In order to rectify this, Radford assumes a functional projection p (read *little p*) with an empty head to which the preposition then adjoins. This analysis is shown in Figure 4.21 on the facing page. This machinery is only necessary in order to retain the assumption that feature checking takes place in specifier-head relations. If one were to allow the preposition to determine the case of its object locally, then all this theoretical apparatus would not be necessary and it would be possible to retain the well-established structure in (23).

Sternefeld (2006: 549–550) is critical of this analysis and compares it to Swiss cheese (being full of holes). The comparison to Swiss cheese is perhaps even too positive since, unlike Swiss cheese, the ratio of substance to holes in the analysis is extreme (2 words vs. 5 empty elements). We have already seen an analysis of noun phrases on page 70, where the structure of an NP, which only consisted of an adjective *interessante* 'interesting', contained more empty elements than overt ones. The difference to the PP analysis discussed here is that empty elements are only postulated in positions where overt determiners and nouns actually occur. The little p projection, on the other hand, is motivated

Figure 4.21: PP analysis following Radford with case assignment in specifier position and little p

entirely theory-internally. There is no theory-external motivation for any of the additional assumptions made for the analysis in Figure 4.21 (see Sternefeld 2006: 549–550).

A variant of this analysis has been proposed by Hornstein, Nunes & Grohmann (2005: 124). The authors do without little p, which makes the structure less complex. They assume the structure in (24), which corresponds to the AgrOP-subtree in Figure 4.21.

(24) [AgrP DP [Agr′ P+Agr [PP t t ]]]

The authors assume that the movement of the DP to SpecAgrP happens invisibly, that is, covertly. This solves Radford's problem and makes the assumption of pP redundant.

The authors motivate this analysis by pointing out agreement phenomena in Hungarian: Hungarian postpositions agree with the preceding noun phrase in person and number. That is, the authors argue that English prepositional and Hungarian postpositional phrases have the same structure derived by movement, albeit the movement is covert in English.

In this way, it is possible to reduce the number and complexity of basic operations and, in this sense, the analysis is minimal. These structures are, however, still incredibly complex. No other kind of theory discussed in this book needs this amount of inflated structure to analyze the combination of a preposition with a noun phrase. The structure in (24) cannot be motivated by reference to data from English and it is therefore impossible to acquire it from the linguistic input. A theory which assumes this kind of structure would have to postulate a Universal Grammar with the information that features can only be checked in (certain) specifier positions (see Chapters 13 and 16 for more on Universal Grammar and language acquisition). For general remarks on (covert) movement see Haider (2014: Section 2.3).

### **4.6.1.3 Locality of selection and functional projections**

Another problem arises from the use of functional heads to encode linear order. In the classic CP/IP-system and all other theories discussed here, a category stands for a class of objects with the same distribution, that is, NP (or DP) stands for pronouns and complex noun phrases. Heads select phrases with a certain category. In the CP/IP-system, I selects a VP and a DP/NP, whereas C selects an IP. In newer analyses, this kind of selectional mechanism does not work as easily. Since movement has taken place in (25b), we are dealing with a TopP or FocP in *das Buch dem Mann zu geben* 'the book the man to give'. Therefore, *um* cannot simply select a non-finite IP, but rather has to be able to select a TopP, FocP, or IP disjunctively. It has to be ensured that TopPs and FocPs are marked with regard to the form of the verb contained inside them, since *um* can only be combined with *zu*-infinitives.

(25)	a. um dem Mann das Buch zu geben
		for the man the book to give
		'to give the book to the man'
	b. um das Buch dem Mann zu geben
		for the book the man to give
		'to give the book to the man'

The category system, selectional mechanisms and projection of features would therefore have to be made considerably more complicated when compared to a system which simply base generates the orders or a system in which a constituent is moved out of the IP, thereby creating a new IP.

Proposals that follow Cinque (1999) are problematic for similar reasons: Cinque assumes the category AdverbP for the combination of an adverb and a VP. There is an empty functional head, which takes the verbal projection as its complement and the adverb surfaces in the specifier of this projection. In these systems, adverb phrases have to pass on inflectional properties of the verb since verbs with particular inflectional properties (finiteness, infinitives with *zu*, infinitives without *zu*, participles) have to be selected by higher heads (see page 185 and Section 9.1.4). There is of course the alternative of using Agree for this, but then all selection would be nonlocal, and after all, selection is not agreement. For further, more serious problems with this analysis, such as the modification of adverbs by adverbs in connection with partial fronting and restrictions on the non-phrasality of preverbal adverbials in English, see Haider (1997a: Section 5).

A special case of the adverb problem is the negation problem: Ernst (1992) studied the syntax of negation more carefully and pointed out that negation can attach to several different verbal projections (26a,b), to adjectives (26c) and adverbs (26d).

(26)	a. Ken could not have heard the news.
	b. Ken could have not heard the news.
	c. a [not unapproachable] figure
	d. [Not always] has she seasoned the meat.

If all of these projections are simply NegPs without any further properties (about verb form, adjective part of speech, adverb part of speech), it would be impossible to account for their different syntactic distributions. Negation is clearly just a special case of the more general problem, since adverbs may attach to adjectives, forming adjectival phrases in the traditional sense and not adverb phrases in Cinque's sense. For instance, the adverb *oft* 'often' in (27) modifies *lachender* 'laughing', forming the adjectival phrase *oft lachender*, which behaves like the unmodified adjectival participle *lachender*: it modifies *Mann* 'man' and it precedes it.

(27)	a. ein lachender Mann
		a laughing man
	b. ein oft lachender Mann
		a often laughing man
		'a man that laughs often'

Of course one could imagine solutions to the last three problems that use the Agree relation to enforce selectional constraints nonlocally, but such accounts would violate locality of selection (see Ernst 1992: 110 and the discussion in Section 18.2 of this book) and would be much more complicated than accounts that assume a direct selection of dependents.

Related to the locality issues that were discussed in the previous paragraph is the assumption of special functional projections for the placement of clitics: if one uses SpeakerP so that a clitic for first person singular can be moved to the correct specifier positions and a HearerP so that the clitic for second person can be moved to the correct position (Poletto 2000: 31), then what one has are special projections which need to encode in addition all features that are relevant for clauses (alternatively one could, of course, assume nonlocal Agree to be responsible for distributional facts). In addition to these features, the category labels contain information that allows higher heads to select clauses containing clitics. In other approaches and earlier variants of transformational grammar, selection was assumed to be strictly local so that higher heads only have access to those properties of embedded categories that are directly relevant for selection (Abraham 2005: 223; Sag 2007) and not information about whether an argument of a head within the clause is the speaker or the hearer or whether some arguments in the clause are realized as clitics. Locality will be discussed further in Section 18.2.

### **4.6.1.4 Feature-driven movement**

Finally, there is a conceptual problem with feature-driven movement, which has been pointed out by Gisbert Fanselow: Frey (2004b: 27) assumes a KontrP (contrastive phrase) and Frey (2004a) a TopP (topic phrase) (see Rizzi (1997) for TopP and FocP (focus phrase) in Italian and Haftka (1995), Grewendorf (2002: 85, 240; 2009), Abraham (2003: 19), Laenzlinger (2004: 224) and Hinterhölzl (2004: 18) for analyses of German with TopP and/or FocP). Constituents have to move to the specifier of these functional heads depending on

their information structural status. Fanselow (2003a) has shown that such movement-based theories for the ordering of elements in the middle field are not compatible with current assumptions of the Minimalist Program. The reason for this is that sometimes movement takes place in order to create space for other elements (altruistic movement). If the information structure of a sentence requires that the closest object to a verb is neither focused nor part of the focus, then the object closest to the verb should not receive the main stress in the clause. This can be achieved by deaccentuation, that is, by moving the accent to another constituent or even, as shown in (28b), by moving the object to a different position from the one in which it receives structural stress.

(28)	a. dass die Polizei gestern Linguisten verhaftete
		that the police yesterday linguists arrested
	b. dass die Polizei Linguisten gestern verhaftete
		that the police linguists yesterday arrested
		'that the police arrested linguists yesterday'

In Spanish, partial focus can be achieved not by special intonation, but only by altruistic movement that takes the object out of the focus. See also Bildhauer & Cook (2010: 72) for a discussion of "altruistic" multiple frontings in German.

It is therefore not possible to assume that elements are moved to a particular position in the tree in order to check some feature motivated by information structural properties. Since feature checking is a prerequisite for movement in current minimalist theory, one would have to postulate a special feature, which only has the function of triggering altruistic movement. Fanselow (2003a: Section 4; 2006: 8) has also shown that the ordering constraints that one assumes for topic, focus and sentence adverbs can be adequately described by a theory which assumes firstly, that arguments are combined (in minimalist terminology: *merged*) with their head one after the other and secondly, that adjuncts can be adjoined to any projection level. The position of sentence adverbs directly before the focused portion of the sentence receives a semantic explanation: since sentence adverbs behave like focus-sensitive operators, they have to directly precede elements that they refer to. It follows from this that elements which do not belong to the focus of an utterance (topics) have to occur in front of the sentence adverb. It is therefore not necessary to assume a special topic position to explain local reorderings in the middle field. This analysis is also pursued in LFG and HPSG. The respective analyses are discussed in more detail in the corresponding chapters.

## **4.6.2 Labeling**

In the Minimalist Program, Chomsky tries to keep combinatorial operations and mechanisms as simple as possible. He motivates this with the assumption that the existence of a UG with less language-specific knowledge is more plausible from an evolutionary point of view than a UG which contains a high degree of language-specific knowledge (Chomsky 2008: 135).

For this reason, he removes the projection levels of X theory, traces, indices and "similar descriptive technology" (Chomsky 2008: 138). All that remains is Merge and Move, that is, Internal and External Merge. Internal and External Merge combine two syntactic objects α and β into a larger syntactic object which is represented as a set { α, β }. α and β can be either lexical items or internally complex syntactic objects. Internal Merge moves a part of an object to its periphery.<sup>15</sup> The result of internally merging α is a set { α, β } where α was a part of β. External Merge also produces a set with two elements. However, two independent objects are merged. The objects that are created by Merge have a certain category (a set of features). For instance, if one combines the elements α and β, one gets { l, { α, β } }, where l is the category of the resulting object. This category is also called a *label*. Since it is assumed that all constituents are headed, the category that is assigned to { α, β } has to be either the category of α or the category of β. Chomsky (2008: 145) discusses the following two rules for the determination of the label of a set:

(29)	a. In { H, α }, H an LI, H is the label.
	b. If α is internally merged to β, forming { α, β }, then the label of β is the label of { α, β }.
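The two rules in (29) can be turned into a small labeling procedure. The sketch below is mine (lexical items as strings, complex objects as dicts) and is meant only to make the rules and their failure cases inspectable; it is not Chomsky's formalization.

```python
# Sketch of the labeling rules in (29) (encoding mine). Lexical items
# (LIs) are strings; complex syntactic objects are dicts with a
# 'label' and their 'parts'.

def is_li(x):
    return isinstance(x, str)

def label_of(x):
    return x if is_li(x) else x['label']

def merge(alpha, beta, internal=False):
    """Form { alpha, beta } and determine its label by (29a)/(29b)."""
    if internal:                        # (29b): alpha internally merged to beta
        lab = label_of(beta)
    elif is_li(alpha) and not is_li(beta):
        lab = alpha                     # (29a): the LI is the label
    elif is_li(beta) and not is_li(alpha):
        lab = beta                      # (29a), mirror image
    else:
        lab = None                      # two LIs or two phrases: undetermined
    return {'label': lab, 'parts': (alpha, beta)}

print(label_of(merge('wrote', 'what')))   # None: two LIs, label undetermined
cp = {'label': 'C', 'parts': ()}          # stand-in for [C [you wrote t]]
print(label_of(merge('what', cp, internal=True)))  # 'C' by (29b); (29a)
# would instead let the LI 'what' project -- the ambiguity exploited
# below for embedded interrogatives vs. free relatives in (31)/(32).
```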

As Chomsky notes, these rules are not unproblematic, since the label is not uniquely determined in all cases. An example is the combination of two lexical elements. If both H and α in (29a) are lexical items (LIs), then both H and α can be the label of the resulting structure. Chomsky notices that this could result in deviant structures, but claims that this concern is unproblematic and ignores it. Chomsky offered a treatment of the combination of two lexical items in his 2013 paper. The solution to the problem is to assume that all combinations of lexical elements consist of a functional element and a root (Marantz 1997, Borer 2005). Roots do not count as labels by definition<sup>16</sup> and hence the category of the functional element determines the category of the combination (Chomsky 2013: 47). Such an analysis can only be rejected: the goal of the Minimalist Program is to simplify the theoretical proposals to such an extent that the models of language acquisition and language evolution become plausible, but in order to simplify basic concepts it is stipulated that a noun cannot simply be a noun but needs a functional element to tell the noun what category it has. Given that the whole point of Chomsky's Bare Phrase Structure (Chomsky 1995a) was the elimination of the unary branching structures in X theory, it is unclear why they are now reintroduced through the back door, only in a more complex way, with an additional empty element.<sup>17</sup> Theories like Categorial Grammar


<sup>15</sup>To be more specific, part of a syntactic object is copied and the copy is placed at the edge of the entire object. The original of this copy is no longer relevant for pronunciation (*Copy Theory of Movement*).

<sup>16</sup>Another category that is excluded as a label by definition is *Conj*, which stands for conjunction (Chomsky 2013: 45–46). This is a stipulation that is needed to get coordination to work. See below.

<sup>17</sup>The old X rule in (i.a) corresponds to the binary combination in (i.b):

(i)	a. N′ → N
	b. N → N-func root

In (i.a), a lexical noun is projected to an N′; in (i.b), a root is combined with a functional nominal head into a nominal category.

and HPSG can combine lexical items directly without assuming any auxiliary projections or empty elements. See also Rauh (2016) for a comparison of the treatment of syntactic categories in earlier versions of Transformational Grammar, HPSG, Construction Grammar, Role and Reference Grammar and root-based Neo-Constructivist proposals like the one assumed by Chomsky (2013). Rauh concludes that the direct connection of syntactic and semantic information is needed and that the Neo-Constructivism of Marantz and Borer has to be rejected. For further criticism of Neo-Constructivist approaches see Wechsler (2008a) and Müller & Wechsler (2014a: Sections 6.1 and 7).

The combination of a pronoun with a verbal projection poses a problem that is related to what has been said above. In the analysis of *He left*, the pronoun *he* is a lexical element and hence would be responsible for the label of *He left*, since *left* is an internally complex verbal projection in Minimalist theories. The result would be a nominal label rather than a verbal one. To circumvent this problem, Chomsky (2013: 46) assumes that *he* has a complex internal structure: 'perhaps D-pro', that is, *he* is (perhaps) composed out of an invisible determiner and a pronoun.

The case in which two non-LIs are externally merged (for instance a nominal and a verbal phrase) is not discussed in Chomsky (2008). Chomsky (2013: 43–44) suggests that a phrase XP is irrelevant for the labeling of { XP, YP } if XP is moved (or rather copied in the Copy Theory of Movement) in a further step. Chomsky assumes that one of two phrases in an { XP, YP } combination has to move, since otherwise labeling would be impossible (p. 12).<sup>18</sup> The following coordination example will illustrate this: Chomsky assumes that the expression *Z and W* is analyzed as follows: first, Z and W are merged. This expression is combined with Conj (30a) and in the next step Z is raised (30b).

(30)	a. [ Conj [ Z W ] ]
	b. [ Z [ Conj [ Z W ] ] ]

Since the Z in { Z, W } is only a copy, it does not count for labeling, and { Z, W } can get the label of W. For the combination of Z and [ Conj [ Z W ] ], it is stipulated that Conj cannot be the label and hence the label of the complete structure is Z.<sup>19</sup>


<sup>18</sup>His explanation is contradictory: on p. 11 Chomsky assumes that a label of a combination of two entities with the same category is this category. But in his treatment of coordination, he assumes that one of the conjuncts has to be raised, since otherwise the complete structure could not be labeled.

<sup>19</sup>As Bob Borsley (p.c. 2013) pointed out to me, this makes wrong predictions for coordinations of two singular noun phrases with *and*, since the result of the coordination is a plural DP and not a singular one like the first conjunct. Theories like HPSG can capture this by grouping features in bundles that can be shared in coordinated structures (syntactic features and nonlocal features, see Pollard & Sag (1994: 202)).

Furthermore, the whole account cannot explain why (i.b) is ruled out:

(i)	a. both Kim and Lee
	b. \* both Kim or Lee

The information about the conjunction has to be part of the representation for *or Lee* in order to be able to contrast it with *and Lee*.

A further problem is that the label of [ Conj [ Z W ] ] should be the label of W, since Conj does not count for label determination. This would lead to a situation in which one has to choose between Z and W to determine the label of the complete structure. Following Chomsky's logic, either Z or W would have to move on to make it possible to label the result. Chomsky (2013) mentions this problem in footnote 40, but does not provide a solution.

A special case that is discussed by Chomsky is the Internal Merge of an LI with a non-LI. According to rule (29a), the label would be the label of the LI; according to (29b), it would be the label of the phrase to which the LI is attached (see also Donati (2006)). Chomsky discusses the combination of the pronoun *what* with *you wrote* as an example.

(31) what [ C [you wrote *t*]]

If the label is determined according to (29b), one then has a syntactic object that would be called a CP in the GB framework; since this CP is, moreover, interrogative, it can function as the complement of *wonder* as in (32a). If the label is determined according to (29a), one gets an object that can function as the accusative object of *read* in (32b), that is, something that corresponds to a DP in GB terminology.

(32) a. I wonder what you wrote.
     b. I read what you wrote.

*what you wrote* in (32b) is a so-called free relative clause.
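The competition between the two labeling options can be made concrete with a small sketch. The following Python fragment is purely illustrative: the class name `SO`, the category symbols, and the encoding of (29a) and (29b) as functions are my own assumptions, not part of any published formalization.

```python
# Minimal sketch of the two labeling options for Internal Merge of an LI
# with a non-LI, as in (31): what [C [you wrote t]].

class SO:  # syntactic object
    def __init__(self, label, daughters=()):
        self.label = label
        self.daughters = tuple(daughters)

    @property
    def is_li(self):
        return not self.daughters  # a lexical item has no daughters

def label_29a(li, phrase):
    """Rule (29a): if one element is an LI, the LI provides the label."""
    assert li.is_li
    return li.label      # -> 'D' for 'what': a DP-like object, cf. (32b)

def label_29b(moved, target):
    """Rule (29b): the phrase targeted by Internal Merge provides the label."""
    return target.label  # -> 'C': a CP-like object, cf. (32a)

what = SO('D')
you_wrote = SO('C', [SO('C'), SO('V', [SO('D'), SO('V')])])

print(label_29a(what, you_wrote))  # D: free relative reading, (32b)
print(label_29b(what, you_wrote))  # C: interrogative reading, (32a)
```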

Chomsky's approach to free relative clauses is interesting but is unable to describe the phenomenon in full breadth. The problem is that the phrase that contains the relative pronoun may be complex (contrary to Donati's claims, see also Citko (2008: 930–932) for a rejection of Donati's claim).<sup>20</sup> (33) provides an English example from Bresnan & Grimshaw (1978: 333). German examples from Bausewein (1991: 155) and Müller (1999a: 78) are given in (34).

(34) b. [*Wessen* Birne] noch halbwegs in der Fassung steckt, pflegt solcherlei Erloschene zu meiden;<sup>22</sup>
        whose bulb/head yet halfway in the socket is uses such extinct to avoid

'Those who still have their wits half way about them tend to avoid such vacant characters;'

c. [*Wessen* Schuhe] "danach" besprenkelt sind, hat keinen Baum gefunden und war nicht zu einem Bogen in der Lage.<sup>23</sup>
   whose shoes after.that speckled are has no tree found and was not to a bow in the position

'Those whose shoes are spattered afterwards couldn't find a tree and were incapable of peeing in an arc.'

<sup>20</sup>Chomsky (2013: 47) admits that there are many open questions as far as the labeling in free relative clauses is concerned, and hence that many open questions remain for labeling as such.

<sup>21</sup>Bausewein (1991: 155).

<sup>22</sup>Thomas Gsella, taz, 12.02.1997, p. 20.

<sup>23</sup>taz, taz mag, 08./09.08.1998, p. XII.

Since *wessen Schuhe* 'whose shoes' is not a lexical item, rule (29b) has to be applied, provided no additional rules are assumed to deal with such cases. This means that the whole free relative clause *wessen Schuhe danach besprenkelt sind* is labeled as CP. For the free relatives in (33) and (34), the labeling as a CP is an unwanted result, since they function as subjects or objects of the matrix predicates and hence should be labeled DP. However, since *wessen Schuhe* is a complex phrase and not a lexical item, (29a) does not apply and hence there is no analysis of the free relative clause as a DP. Therefore, it seems one must return to something like the GB analysis proposed by Groos & van Riemsdijk (1981), at least for the German examples. Groos & van Riemsdijk assume that free relatives consist of an empty noun that is modified by the relative clause like a normal noun. In such an approach, the complexity of the relative phrase is irrelevant. It is only the empty head that is relevant for labeling the whole phrase.<sup>24</sup> However, once empty heads are countenanced in the analysis, the application of (29a) to (31) is undesirable, since it would result in two analyses for (32b): one with the empty nominal head and one in which (31) is labeled as DP directly. One might argue that in the case of several possible derivations the most economical one wins, but the assumption of transderivational constraints leads to undesired consequences (Pullum 2013: Section 5).

<sup>24</sup>Assuming an empty head is problematic since it may be used as an argument only in those cases in which it is modified by an adjunct, namely the relative clause (Müller 1999a: 97). See also Ott (2011: 187) for a later rediscovery of this problem. It can be solved in HPSG by assuming a unary projection that projects the appropriate category from a relative clause. I also use the unary projection to analyze so-called *nonmatching* free relative clauses (Müller 1999a). In constructions with nonmatching free relative clauses, the relative clause fills an argument slot that does not correspond to the properties of the relative phrase (Bausewein 1991). Bausewein discusses the following example, in which the relative phrase is a PP but the free relative fills the accusative slot of *kocht* 'cooks'.

(i) Sie kocht, worauf sie Appetit hat.
    she cooks where.on she appetite has
    'She cooks what she feels like eating.'

See Müller (1999a: 60–62) for corpus examples.

Minimalist theories do not employ unary projections. Ott (2011) develops an analysis in which the category of the relative phrase is projected, but he does not have a solution for nonmatching free relative clauses (p. 187). The same is true for Citko's analysis, in which an internally merged XP can provide the label.

Many other proposals for labeling or, rather, non-labeling exist. For instance, some Minimalists want to eliminate labeling altogether and argue for a label-free syntax. As was pointed out by Osborne, Putnam & Groß (2011), such analyses bring Minimalism closer to Dependency Grammar. It is unclear how any of these models could deal with non-matching free relative clauses. Groß & Osborne (2009: Section 5.3.3) provide an analysis of free relatives in their version of Dependency Grammar, but deny the existence of nonmatching ones (p. 78). They suggest an analysis in which the relative phrase is the root/label of the free relative clause and hence they have the same problem as Minimalist proposals have with non-matching free relative clauses. As Groß & Osborne (2009: 73) and Osborne et al. (2011: 327) state: empty heads are usually not assumed in (their version of) Dependency Grammar. Neither are unary branching projections. This seems to make it impossible to state that free relative clauses with a relative phrase YP can function as XP, provided XP is a category that is higher in the obliqueness hierarchy of Keenan & Comrie (1977), a generalization that was discovered by Bausewein (1991) (see also Müller 1999a: 60–62 and Vogel 2001: 4). In order to be able to express the relevant facts, an element or a label has to exist that is different from the label of *worauf* in (i).

Chomsky (2013) abandons the labeling condition in (29b) and replaces it with general labeling rules that hold for both internal and external Merge of two phrases. He distinguishes two cases. In the first case, labeling becomes possible since one of the two phrases of the set { XP, YP } is moved away. This case was already discussed above. Chomsky writes about the other case: *X and Y are identical in a relevant respect, providing the same label, which can be taken as the label of the SO* (p. 11). He sketches an analysis of interrogative clauses on p. 13 in which the interrogative phrase has a Q feature and the remaining sentence from which the Q phrase was extracted has a Q feature as well. Since the two constituents share this property, the label of the complete clause will be Q. This kind of labeling will "perhaps" also be used for labeling normal sentences consisting of a subject and a verb phrase agreeing in person and number. These features would be responsible for the label of the sentence. The exact details are not worked out, but will almost certainly be more complex than (29b).

A property that is inherent in both Chomsky (2005) and Chomsky (2013) is that the label is exclusively determined from one of the merged objects. As Bob Borsley pointed out to me, this is problematic for interrogative/relative phrases like (35).

(35) with whom

The phrase in (35) is both a prepositional phrase (because the first word is a preposition) and an interrogative/relative phrase (because the second word is an interrogative/relative word). So, what is needed for the correct labeling of PPs like the one in (35) is a well-defined way of percolating different properties from daughters to the mother node.<sup>25</sup>

For further problems concerning labeling and massive overgeneration by recent formulations of Merge see Fabregas et al. (2016).

Summarizing, one can say that labeling, which was introduced to simplify the theory and reduce the amount of language-specific innate knowledge that has to be assumed, can only be made to function with a considerable amount of stipulations. For instance, the combination of lexical elements requires the assumption of empty functional heads, whose only purpose is determining the syntactic category of a certain lexical element. If this corresponded to linguistic reality, knowledge about labeling, the respective functional categories, and information about those categories that have to be ignored for the labeling would have to be part of innate language-specific knowledge, and nothing would be gained. One would be left with bizarre analyses with an enormous degree of complexity without having made progress in the Minimalist direction. Furthermore, there are empirical problems and a large number of unsolved cases.

<sup>25</sup>HPSG solves this problem by distinguishing head features, which include part of speech information, and nonlocal features, which contain information about extraction and interrogative/relative elements. Head features are projected from the head; the nonlocal features of a mother node are the union of the nonlocal features of the daughters minus those that are bound off by certain heads or in certain configurations.

Citko (2008: 926) suggests an analysis in which both daughters can contribute to the mother node. The result is a complex label like { P, { D, N } }. This is a highly complex data structure and Citko does not provide any information on how the relevant information that it contains is accessed. Is an object with the label { P, { D, N } } a P, a D or an N? One could say that P has priority since it is in the least embedded set, but D and N are in one set. What about conflicting features? How does a preposition that selects for a DP decide whether { D, N } is a D or an N? In any case it is clear that a formalization will involve recursive relations that dig out elements of subsets in order to access their features. This adds to the overall complexity of the proposal and is clearly dispreferred over the HPSG solution, which uses one part of speech value per linguistic object.

The conclusion is that the label of a binary combination should not be determined in the ways suggested by Chomsky (2008, 2013). An alternative option for computing the label is to use the functor of a functor argument structure as the label (Berwick & Epstein 1995: 145). This is the approach taken by Categorial Grammar (Ajdukiewicz 1935, Steedman 2000) and in Stabler's Minimalist Grammars (2011b).<sup>26</sup> Stabler's formalization of Merge will be discussed in Section 4.6.4.

### **4.6.3 Specifiers, complements, and the remains of X theory**

Chomsky (2008: 146) assumes that every head has exactly one complement but an arbitrary number of specifiers. In standard X theory, the restriction that there can be at most one complement followed from the general X schema and the assumption that structures are at most binary branching: in standard X theory a lexical head was combined with all its complements to form an X′ . If there are at most two daughters in a phrase, it follows that there can be only one complement (Sentences with ditransitive verbs have been analyzed with an empty head licensing an additional argument; see Larson (1988) for the suggestion of an empty verbal head and Müller & Wechsler (2014a: Sections 6.1 and 7) for a critical assessment of approaches involving little *v*). In standard X theory there was just one specifier. This restriction has now been abandoned. Chomsky writes that the distinction between specifier and complement can now be derived from the order in which elements are merged with their head: elements that are *first-merged* are complements and all others – those which are *later-merged* – are specifiers.

Such an approach is problematic for sentences with monovalent verbs: according to Chomsky's proposal, subjects of monovalent verbs would not be specifiers but complements.<sup>27</sup> This problem will be discussed in more detail in Section 4.6.4.

<sup>26</sup>For the Categorial Grammar approach to work, it is necessary to assign the category x/x to an adjunct, where x stands for the category of the head to which the adjunct attaches. For instance, an adjective combines with a nominal object to form a nominal object. Therefore its category is n/n rather than adj.

Similarly, Stabler's approach does not extend to adjuncts unless he is willing to assign the category noun to attributive adjectives. One way out of this problem is to assume a special combination operation for adjuncts and their heads (see Frey & Gärtner 2002: Section 3.2). Such a combination operation is equivalent to the Head-Adjunct Schema of HPSG.

<sup>27</sup>Pauline Jacobson (p.c. 2013) pointed out that the problem with intransitive verbs could be solved by assuming that the last-merged element is the specifier and all non-last-merged elements are complements. This would solve the problems with intransitive verbs and with the coordination of verbs in (36) but it would not solve the problem of coordination in head-final languages as in (39). Furthermore, current Minimalist approaches make use of multiple specifiers and this would be incompatible with the Jacobsonian proposal unless one would be willing to state more complicated restrictions on the status of non-first-merged elements.

Apart from this, theories assuming that syntactic objects merged with word groups are specifiers do not allow for analyses in which two lexical verbs are directly coordinated, as in (36):<sup>28</sup>

(36) He [knows and loves] this record.

For example, in an analysis suggested by Steedman (1991: 264), *and* (being the head) is first merged with *loves* and then the result is merged with *knows*. The result of this combination is a complex object that has the same syntactic properties as the combined parts: the result is a complex verb that needs a subject and an object. After the combination of the conjunction with the two verbs, the result has to be combined with *this record* and *he*. *this record* behaves in all relevant respects like a complement. Following Chomsky's definition, however, it should be a specifier, since it is combined with the third application of Merge. The consequences are unclear. Chomsky assumes that Merge does not specify constituent order. According to him, the linearization happens at the level of Phonological Form (PF). The restrictions that hold there are not described in his recent papers. However, if the categorization as complement or specifier plays a role for linearization as in Kayne's work (2011: 2, 12) and in Stabler's proposal (see Section 4.6.4), *this record* would have to be serialized before *knows and loves*, contrary to the facts. This means that a Categorial Grammar-like analysis of coordination is not viable and the only remaining option would seem to be to assume that *knows* is combined with an object and then two VPs are coordinated. Kayne (1994: 61, 67) follows Wexler & Culicover (1980: 303) in suggesting such an analysis and assumes that the object in the first VP is deleted. However, Borsley (2005: 471) shows that such an analysis makes wrong predictions, since (37a) would be derived from (37b) although these sentences differ in meaning.<sup>29</sup>

(37) a. Hobbs whistled and hummed the same tune.

b. Hobbs whistled the same tune and hummed the same tune.

Another innovation of Chomsky's 2013 paper is that he eliminates the concept of specifier. He writes in footnote 27 on page 43: *There is a large and instructive literature on problems with Specifiers, but if the reasoning here is correct, they do not exist and the problems are unformulable.* This is correct, but this also means that everything that was explained with reference to the notion of specifier in the Minimalist framework until now does not have an explanation any longer. If one follows Chomsky's suggestion, a large part of the linguistic research of the past years becomes worthless and has to be redone.

Chomsky did not commit himself to a particular view on linearization in his earlier work, but somehow one has to ensure that the entities that were called specifier are realized in a position in which constituents are realized that used to be called specifier. This means that the following remarks will be relevant even under current Chomskyan assumptions.

<sup>28</sup>Chomsky (2013: 46) suggests the coordination analysis in (30): according to this analysis, the verbs would be merged directly and one of the verbs would be moved around the conjunction in a later step of the derivation. As was mentioned in the previous section, such analyses do not contribute to the goal of making minimal assumptions about innate language specific knowledge since it is absolutely unclear how such an analysis of coordination would be acquired by language learners. Hence, I will not consider this coordination analysis here.

<sup>29</sup>See also Bartsch & Vennemann (1972: 102), Jackendoff (1977: 192–193), Dowty (1979: 143), den Besten (1983: 104–105), Klein (1985: 8–9) and Eisenberg (1994b) for similar observations and criticism of similar proposals in earlier versions of Transformational Grammar.

Since semantic interpretation cannot see processes such as deletion that happen at the level of Phonological Form (Chomsky 1995b: Chapter 3), the differences in meaning cannot be explained by an analysis that deletes material.

In a further variant of the VP coordination analysis, there is a trace that is related to *this record*. This would be a *Right-Node-Raising* analysis. Borsley (2005) has shown that such analyses are problematic. Among the problematic examples that he discusses is the following pair (see also Bresnan 1974: 615).

(38) a. He tried to persuade and convince him.

b. \* He tried to persuade, but couldn't convince, him.

The second example is ungrammatical if *him* is not stressed. In contrast, (38a) is well-formed even with unstressed *him*. So, if (38a) were an instance of Right-Node-Raising, the contrast would be unexpected. Borsley therefore excludes a Right-Node-Raising analysis.

The third possibility for analyzing sentences like (36) is to assume discontinuous constituents and to use material twice: the two VPs *knows this record* and *loves this record* are coordinated, with the first VP being discontinuous. (See Crysmann (2001) and Beavers & Sag (2004) for such proposals in the framework of HPSG.) However, discontinuous constituents are not usually assumed in the Minimalist framework (see for instance Kayne (1994: 67)). Furthermore, Abeillé (2006) showed that there is evidence for structures in which lexical elements are coordinated directly. This means that one needs analyses like the CG analysis discussed above, which would result in the problems with the specifier/complement status just discussed.

Furthermore, Abeillé has pointed out that NP/DP coordinations in head-final languages like Korean and Japanese present difficulties for Merge-based analyses. (39) shows a Japanese example.

(39) Robin-to Kim
     Robin-and Kim
     'Kim and Robin'

In the first step, *Robin* is merged with *to*. In a second step, *Kim* is merged. Since *Kim* is a specifier, one would expect *Kim* to be serialized before the head, as is the case for other specifiers in head-final languages.

Chomsky tries to get rid of the unary branching structures of standard X theory, which were needed to project lexical items like pronouns and determiners into full phrases, referring to work by Muysken (1982). Muysken used the binary features min and max to classify syntactic objects as minimal (words or word-like complex objects) or maximal (syntactic objects that stand for complete phrases). Such a feature system can be used to describe pronouns and determiners as [+min, +max]. Verbs like *give*, however, are classified as [+min, −max]. They have to project in order to reach the [+max]-level. If specifiers and complements are required to be [+max], then determiners and pronouns fulfill this requirement without having to project from X<sup>0</sup> via X′ to the XP-level.

In Chomsky's system, the min/max distinction is captured with respect to the completeness of heads (complete = phrase) and to the property of being a lexical item. However, there is a small but important difference between Muysken's and Chomsky's proposal: the predictions with regard to the coordination data that was discussed above. Within the category system of X theory, it is possible to combine two X<sup>0</sup>s to get a new, complex X<sup>0</sup>. This new object has basically the same syntactic properties that simple X<sup>0</sup>s have (see Jackendoff 1977: 51 and Gazdar, Klein, Pullum & Sag 1985). In Muysken's system, the coordination rule (or the lexical item for the conjunction) can be formulated such that the coordination of two +min items is a +min item. In Chomsky's system an analogous rule cannot be defined, since the coordination of two lexical items is not a lexical item any longer.
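The difference can be illustrated with a sketch of Muysken's feature system. The `Cat` class and the `coordinate` function are hypothetical encodings for illustration; the point is only that a rule preserving [+min] under coordination is statable in this system, while Chomsky's word/phrase distinction cannot express it.

```python
# Sketch of Muysken's [min, max] classification and a coordination rule
# that preserves [+min], as described above.

from dataclasses import dataclass

@dataclass
class Cat:
    pos: str    # part of speech
    min: bool   # word(-like) object?
    max: bool   # complete phrase?

# pronouns are [+min, +max]; verbs like 'give' still have to project:
he   = Cat('N', min=True, max=True)
give = Cat('V', min=True, max=False)

def coordinate(c1, c2):
    """Coordination of two items of the same category; crucially, the
    coordination of two [+min] items can be stated to be [+min] again."""
    assert c1.pos == c2.pos
    return Cat(c1.pos, min=c1.min and c2.min, max=c1.max and c2.max)

print(coordinate(Cat('V', True, False), Cat('V', True, False)))
# Cat(pos='V', min=True, max=False): 'knows and loves' is still word-like
```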

Like Chomsky in his recent Minimalist work, Categorial Grammar (Ajdukiewicz 1935) and HPSG (Pollard and Sag 1987; 1994: 39–40) do not (strictly) adhere to X theory. Both theories assign the symbol NP to pronouns (for CG see Steedman & Baldridge (2006: p. 615), see Steedman (2000: Section 4.4) for the incorporation of lexical type raising in order to accommodate quantification). The phrase *likes Mary* and the word *sleeps* have the same category in Categorial Grammar (s\np). In both theories it is not necessary to project a noun like *tree* from N<sup>0</sup> to N in order to be able to combine it with a determiner or an adjunct. Determiners and monovalent verbs in controlled infinitives are not projected from an X<sup>0</sup> level to the XP level in many HPSG analyses, since the valence properties of the respective linguistic objects (an empty subcat or comps list) are sufficient to determine their combinatorial potential and hence their distribution (Müller 1996d; Müller 1999b). If the property of being minimal is needed for the description of a phenomenon, the binary feature lex is used in HPSG (Pollard and Sag 1987: 172; 1994: 22). However, this feature is not needed for the distinction between specifiers and complements. This distinction is governed by principles that map elements of an argument structure list (arg-st) onto valence lists that are the value of the specifier and the complements feature (abbreviated as spr and comps respectively).<sup>30</sup> Roughly speaking, the specifier in a verbal projection is the least oblique argument of the verb for configurational languages like English. Since the argument structure list is ordered according to the obliqueness hierarchy of Keenan & Comrie (1977), the first element of this list is the least oblique argument of a verb and this argument is mapped to the spr list. The element in the spr list is realized to the left of the verb in SVO languages like English. The elements in the comps list are realized to the right of their head. Approaches like the one by Ginzburg & Sag (2000: 34, 364) that assume that head-complement phrases combine a word with its arguments have the same problem with coordinations like (36) since the head of the VP is not a word.<sup>31</sup> However, this restriction for the head can be replaced by one that refers to the lex feature rather than to the property of being a word or lexical item.

<sup>30</sup>Some authors assume a three-way distinction between subjects, specifiers, and complements.

<sup>31</sup>As mentioned above, a multidomination approach with discontinuous constituents is a possible solution for the analysis of (36) (see Crysmann 2001 and Beavers & Sag 2004). However, the coordination of lexical items has to be possible in principle as Abeillé (2006) has argued. Note also that the HPSG approach to coordination cannot be taken over to the MP. The reason is that the HPSG proposals involve special grammar rules for coordination and MP comes with the claim that there is only Merge. Hence the additional introduction of combinatorial rules is not an option within the MP.
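The mapping of ARG-ST elements onto the spr and comps lists described above can be sketched as follows; the function name and the list encoding are my own simplifications (actual analyses involve case assignment and language-particular mapping principles).

```python
# Sketch of the ARG-ST -> SPR/COMPS mapping for a configurational SVO
# language like English: ARG-ST is ordered by obliqueness, the least
# oblique argument goes to SPR, the rest go to COMPS.

def map_arg_st(arg_st):
    return {'SPR': arg_st[:1], 'COMPS': arg_st[1:]}

# hypothetical ARG-ST of a ditransitive verb like 'gives':
print(map_arg_st(['NP[nom]', 'NP[acc]', 'NP[acc]']))
# {'SPR': ['NP[nom]'], 'COMPS': ['NP[acc]', 'NP[acc]']}
```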


Pollard & Sag as well as Ginzburg & Sag assume flat structures for English. Since one of the daughters is marked as lexical, it follows that the rule does not combine a head with a subset of its complements and then apply a second time to combine the result with further complements. Therefore, a structure like (40a) is excluded, since *gave John* is not a word and hence cannot be used as the head daughter in the rule.

(40) a. [[gave John] a book]
     b. [gave John a book]

Instead of (40a), only analyses like (40b) are admitted; that is, the head is combined with all its arguments in one go. The alternative is to assume binary branching structures (Müller 2015a; Müller & Ørsnes 2015: Section 1.2.2). In such an approach, the head complement schema does not restrict the word/phrase status of the head daughter. The binary branching structures in HPSG correspond to External Merge in the MP.

In the previous two sections, certain shortcomings of Chomsky's labeling definition and problems with the coordination of lexical items were discussed. In the following section, I discuss Stabler's definition of Merge in Minimalist Grammar, which is explicit about labeling and in one version does not have the problems discussed above. I will show that his formalization corresponds rather directly to HPSG representations.

### **4.6.4 Minimalism, Categorial Grammar, and HPSG**

In this section, I will relate Minimalism, Categorial Grammar and HPSG to one another. This section is based on Müller (2013c). Readers who are not yet familiar with Categorial Grammar and HPSG should skim this section or consult Chapters 6, 8 and 9 and return here afterwards.

In Section 4.6.2, it was shown that Chomsky's papers leave many crucial details about labeling unspecified. Stabler's work is relatively close to recent Minimalist approaches but is worked out much more precisely (see also Stabler (2011a: 397, 399, 400) on the formalization of post-GB approaches). Stabler (2001) shows how Kayne's theory of remnant movement can be formalized and implemented. Stabler refers to his particular way of formalizing Minimalist theories as *Minimalist Grammars* (MG). There are a number of interesting results with regard to the weak generative capacity of Minimalist Grammars and variants thereof (Michaelis 2001). It has been shown, for instance, that the set of languages that can be generated with MGs includes the set of those which can be generated by Tree Adjoining Grammars (see Chapter 12). This means that MGs can assign structures to a larger set of word strings; however, the structures derived by MGs are not necessarily the same as the structures created by TAGs. For more on the generative capacity of grammars, see Chapter 17.

Although Stabler's work can be regarded as a formalization of Chomsky's Minimalist ideas, Stabler's approach differs from Chomsky's in certain matters of detail. Stabler assumes that the results of the two Merge operations are not sets but pairs. The head in a pair is marked by a pointer ('<' or '>'). Bracketed expressions like { α, { α, β } } (discussed in Section 4.6.2) are replaced by trees like the one in (41).

$$\text{(41)}\quad [_{>}\; 3\; [_{<}\; 1\; 2]]$$

1 is the head in (41), 2 is the complement and 3 the specifier. The pointer points to the part of the structure that contains the head: '<' indicates that the head is the left daughter, '>' that it is the right daughter. The daughters in a tree are ordered, that is, 3 is serialized before 1, and 1 before 2.

Stabler (2011a: 402) defines External Merge as follows:

$$\text{(42)}\quad \operatorname{em}(t_1[{=}f],\, t_2[f]) = \begin{cases} [_{<}\; t_1\; t_2] & \text{if } t_1 \text{ has exactly one node} \\ [_{>}\; t_2\; t_1] & \text{otherwise} \end{cases}$$

=f is a selection feature and f the corresponding category. When t<sub>1</sub>[=f] and t<sub>2</sub>[f] are combined, the result is a tree in which the selection feature of t<sub>1</sub> and the respective category feature of t<sub>2</sub> are deleted. The first case in (42) represents the combination of a (lexical) head with its complement: t<sub>1</sub> is positioned before t<sub>2</sub>. The condition that t<sub>1</sub> has to have exactly one node corresponds to Chomsky's assumption that the first Merge is a Merge with a complement and that all further applications of Merge are Merges with specifiers (Chomsky 2008: 146).

Stabler defines Internal Merge as follows:<sup>32</sup>

$$\text{(43)}\quad \operatorname{im}(t_1[{+}f]) = [_{>}\; t_2^{>}\; t_1\,\{t_2[{-}f]^{>} \mapsto \epsilon\}]$$

t<sub>1</sub> is a tree with a subtree t<sub>2</sub> which has the feature f with the value '−'. This subtree is deleted (t<sub>2</sub>[−f]<sup>></sup> ↦ ε) and a copy of the deleted subtree without the −f feature (t<sub>2</sub><sup>></sup>) is positioned in specifier position. The element in specifier position has to be a maximal projection. This requirement is visualized by the raised '>'.
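The following sketch mirrors the geometry of (42) and (43) in Python. The tuple encoding of pointer trees, the helper `yield_of`, and the drastically simplified `im` (which ignores features and the Shortest Move Constraint) are my own assumptions, for illustration only.

```python
# Sketch of Stabler-style External and Internal Merge over pointer trees.
# A tree is a token string (leaf) or a triple (pointer, left, right);
# '<'/'>' points towards the head. Feature bookkeeping (=f/f, +f/-f) is
# omitted -- this only shows the geometry of the two operations.

def em(t1, t2):
    if isinstance(t1, str):       # t1 is a single node: complement merge
        return ('<', t1, t2)      # head t1 precedes its complement t2
    return ('>', t2, t1)          # otherwise t2 is a specifier, precedes t1

def im(t, target):
    """Very simplified Internal Merge, cf. (43): delete the leaf `target`
    inside t and remerge a copy as the specifier."""
    def delete(t):
        if isinstance(t, str):
            return '' if t == target else t
        p, left, right = t
        return (p, delete(left), delete(right))
    return ('>', target, delete(t))

def yield_of(t):
    if isinstance(t, str):
        return t
    _, left, right = t
    return f'{yield_of(left)} {yield_of(right)}'.strip()

# (44): merge 'praises' with its object first, then with the subject
vp = em(em('praises', 'who'), 'Marie')
print(yield_of(vp))               # 'Marie praises who'

cp = im(('<', '', vp), 'who')     # empty C head, then wh-movement
print(yield_of(cp))               # 'who Marie praises'
```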

Stabler provides an example derivation for the sentence in (44).

(44) who Marie praises

<sup>32</sup>In addition to what is shown in (43), Stabler's definition contains a variant of the *Shortest Move Constraint* (SMC), which is irrelevant for the discussion at hand and hence will be omitted.

*praises* is a two-place verb with two =D features. This encodes the selection of two determiner phrases. *who* and *Marie* are two Ds and they fill the object and subject position of the verb. The resulting verbal projection *Marie praises who* is embedded under an empty complementizer which is specified as +wh and hence provides the position for the movement of *who*, which is placed in the specifier position of CP by the application of Internal Merge. The −wh feature of *who* is deleted and the result of the application of Internal Merge is *who Marie praises*.

This analysis has a problem that was pointed out by Stabler himself in unpublished work cited by Veenstra (1998: 124): it makes incorrect predictions in the case of monovalent verbs. If a verb is combined with a DP, the definition of External Merge in (42) treats this DP as a complement<sup>33</sup> and serializes it to the right of the head. Instead of analyses of sentences like (45a), one gets analyses of strings like (45b).

(45) a. Max sleeps.
     b. \* Sleeps Max.

To solve this problem, Stabler assumes that monovalent verbs are combined with a nonovert object (see Veenstra (1998: 61, 124), who, quoting Stabler's unpublished work, also adopts this solution). With such an empty object, the resulting structure contains the empty object as a complement. The empty object is serialized to the right of the verb and *Max* is the specifier and hence serialized to the left of the verb as in (46).

(46) Max sleeps \_.

Of course, any analysis of this kind is both stipulative and entirely ad hoc, being motivated only by the wish to have uniform structures. Moreover, it exemplifies precisely one of the methodological deficiencies of Transformational Generative Grammar discussed at length by Culicover & Jackendoff (2005: Section 2.1.2): the excessive appeal to uniformity.

An alternative is to assume an empty verbal head that takes *sleeps* as complement and *Max* as subject. Such an analysis is often assumed for ditransitive verbs in Minimalist theories which assume Larsonian verb shells (Larson 1988). Larsonian analyses usually assume that there is an empty verbal head that is called little *v* and that contributes a causative meaning. As was discussed in Section 4.1.4, Adger (2003) adopts a little *v*-based analysis for intransitive verbs. Omitting the TP projection, his analysis is provided in Figure 4.22. Adger argues that the analysis of sentences with unergative verbs involves a little *v* that selects an agent, while the analysis of unaccusative verbs involves a little *v* that does not select an N head. For unaccusatives, he assumes that the verb selects a theme. He states that little *v* does not necessarily have a causative meaning but introduces the agent. But note that in the example at hand, the subject of *sleep* is neither causing an event, nor is it necessarily deliberately doing something. So it is rather an undergoer than an agent. This means that the assumption of the empty *v* head is made for purely theory-internal reasons without any semantic motivation in the case of intransitives. If the causative contribution of little *v* in ditransitive constructions is assumed, this would mean that one needs two little *v*s, one with and one without a causative meaning. In addition to the lack of theory-external motivation for little *v*, there are also empirical problems for such analyses (for instance with coordination data). The reader is referred to Müller & Wechsler (2014a: Sections 6.1 and 7) for further details.

<sup>33</sup>Compare also Chomsky's definition of specifier and complement in Section 4.6.3.

Figure 4.22: Little *v*-based analysis of *Max sleeps*

Apart from the two operations that were defined in (42) and (43), there are no other operations in MG.<sup>34</sup> Besides the problem with monovalent verbs, this results in the problem that was discussed in Section 4.6.3: there is no analysis with a direct combination of verbs for (36) – repeated here as (47).

(47) He [knows and loves] this record.

The reason is that the combination of *knows*, *and* and *loves* consists of three nodes and the Merge of *knows and loves* with *this record* would make *this record* the specifier of the structure. Therefore *this record* would be serialized before *knows and loves*, contrary to the facts. Since the set of languages that can be generated with MGs contains the languages that can be generated with certain TAGs and with Combinatory Categorial Grammar (Michaelis 2001), the existence of a Categorial Grammar analysis implies that the coordination examples can be derived in MGs somehow. But for linguists, the fact that it is possible to generate a certain string at all (the weak capacity of a grammar) is of less significance. It is the actual structures that are licensed by the grammar that are important (the strong capacity).

### **4.6.4.1 Directional Minimalist Grammars and Categorial Grammar**

Apart from reintroducing X<sup>0</sup> categories, the coordination problem can be solved by changing the definition of Merge in a way that allows heads to specify the direction of combination with their arguments: Stabler (2011b: 635) suggests marking the position of an argument relative to its head together with the selection feature and gives the following redefinition of External Merge.

<sup>34</sup>For extensions see Frey & Gärtner (2002: Section 3.2).


$$\text{(48)}\quad \operatorname{em}(t_1[\alpha],\, t_2[x]) = \begin{cases} [_{<}\; t_1\; t_2] & \text{if } \alpha \text{ is } {=}x \\ [_{>}\; t_2\; t_1] & \text{if } \alpha \text{ is } x{=} \end{cases}$$

The position of the equal sign specifies on which side of the head an argument has to be realized: =x selects an x to the right of the head, x= selects an x to the left. This corresponds to forward and backward Application in Categorial Grammar (see Section 8.1.1). Stabler calls this form of grammar Directional MG (DMG). This variant of MG avoids the problem with monovalent verbs, and the coordination data is unproblematic as well if one assumes that the conjunction is a head with a variable category that selects for elements of the same category to the left and to the right of itself. *know* and *love* would both select an object to the right and a subject to the left, and this requirement would be transferred to *knows and loves*.<sup>35</sup> See Steedman (1991: 264) for the details of the CG analysis and Bouma & van Noord (1998: 52) for an earlier HPSG proposal involving directionality features along the lines suggested by Stabler for his DMGs.
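A sketch of (48) in the same style as the earlier one; the string encoding of the directional selection features ('=x' vs. 'x=') is an illustrative assumption.

```python
# Sketch of Directional Merge as in (48): the side of the '=' on the
# selection feature fixes on which side of the head the argument appears,
# just like forward/backward Application in Categorial Grammar.

def em_directional(t1, feature, t2):
    if feature.startswith('='):    # '=x': argument to the right
        return ('<', t1, t2)
    if feature.endswith('='):      # 'x=': argument to the left
        return ('>', t2, t1)
    raise ValueError('not a selection feature')

# 'loves' selects its object to the right (=d) and its subject to the
# left (d=), like the CG category (s\np)/np:
vp = em_directional('loves', '=d', 'this record')
s = em_directional(vp, 'd=', 'he')
print(s)  # ('>', 'he', ('<', 'loves', 'this record'))
```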

### **4.6.4.2 Minimalist Grammars and Head-Driven Phrase Structure Grammar**

The notation for marking the head of a structure with '>' and '<' corresponds directly to the HPSG representation of heads. Since HPSG is a sign-based theory, information about all relevant linguistic levels is represented in descriptions (phonology, morphology, syntax, semantics, information structure). (49) gives an example: the lexical item for the word *grammar*.

$$\text{(49)}\quad \begin{bmatrix} \textit{word} \\ \text{PHON} & \langle \textit{grammar} \rangle \\ \text{SYNSEM|LOC} & \begin{bmatrix} \text{CAT} & \begin{bmatrix} \text{HEAD} & \textit{noun} \\ \text{SPR} & \langle \text{DET} \rangle \end{bmatrix} \\ \text{CONT} & \dots \end{bmatrix} \end{bmatrix}$$

The part of speech of *grammar* is *noun*. In order to form a complete phrase, it requires a determiner. This is represented by giving the spr feature the value ⟨ DET ⟩. Semantic information is listed under cont. For details see Chapter 9.

<sup>35</sup>Note however, that this transfer makes it necessary to select complex categories, a fact that I overlooked in Müller (2013c). The selection of simplex features vs. complex categories will be discussed in Section 4.6.5.

Since we are dealing with syntactic aspects exclusively, only a subset of the features used is relevant: valence information as well as information about part of speech and certain morphosyntactic properties that are relevant for the external distribution of a phrase are represented in a feature description under the path synsem|loc|cat. The features that are particularly interesting here are the so-called head features. Head features are shared between a lexical head and its maximal projection. The head features are located inside cat and are grouped together under the path head. Complex hierarchical structure is also modeled with feature-value pairs. The constituents of a complex linguistic object are usually represented as parts of the representation of the complete object. For instance, there is a feature head-daughter whose value is a feature structure that models a linguistic object that contains the head of a phrase. The Head Feature Principle (50) refers to this daughter and ensures that the head features of the head daughter are identical to the head features of the mother node, that is, they are identical to the head features of the complete object.

$$\text{(50)}\quad \textit{headed-phrase} \Rightarrow \begin{bmatrix} \text{SYNSEM|LOC|CAT|HEAD} & \boxed{1} \\ \text{HEAD-DTR|SYNSEM|LOC|CAT|HEAD} & \boxed{1} \end{bmatrix}$$

Identity is represented by boxes with the same number.

Ginzburg & Sag (2000: 30) represent all daughters of a linguistic object in a list that is given as the value of the daughters attribute. The value of the feature head-daughter is identified with one of the elements of the daughters list:

$$\begin{array}{ll} \text{(51)} & \text{a. } \begin{bmatrix} \text{HEAD-DTR} & \boxed{1} \\ \text{DTRS} & \langle\, \boxed{1}\,\alpha, \beta \,\rangle \end{bmatrix} \\\\ & \text{b. } \begin{bmatrix} \text{HEAD-DTR} & \boxed{1} \\ \text{DTRS} & \langle\, \alpha, \boxed{1}\,\beta \,\rangle \end{bmatrix} \end{array}$$

α and β are shorthands for descriptions of linguistic objects. The important point about the two descriptions in (51) is that the head daughter is identical to one of the two daughters, which is indicated by the boxed 1 in front of α in (51a) and in front of β in (51b). In the first feature description, the first daughter is the head and in the second description, the second daughter is the head. Because of the Head Feature Principle, the syntactic properties of the whole phrase are determined by the head daughter. That is, the syntactic properties of the head daughter correspond to the label in Chomsky's definition. This notation corresponds exactly to the one that is used by Stabler: (51a) is equivalent to (52a) and (51b) is equivalent to (52b).

$$\text{(52)}\quad \text{a. } [_{<}\; \alpha\; \beta] \qquad \text{b. } [_{>}\; \alpha\; \beta]$$

An alternative structuring of this basic information, discussed by Pollard & Sag (1994: Chapter 9), uses the two features head-daughter and non-head-daughters rather than head-daughter and daughters. This gives rise to feature descriptions like (53a), which corresponds directly to Chomsky's set-based representations, discussed in Section 4.6.2 and repeated here as (53b).

$$\begin{array}{ll} \text{(53)} & \text{a. } \begin{bmatrix} \text{HEAD-DTR} & \alpha\\ \text{NON-HEAD-DTRS} & \langle \beta \rangle \end{bmatrix} \\\\ & \text{b. } \{\alpha, \{\alpha, \beta\}\} \end{array}$$

The representation in (53a) does not contain information about the linear precedence of α and β. Linear precedence of constituents is constrained by linear precedence rules, which are represented independently from constraints regarding (immediate) dominance.
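A sketch of the Head Feature Principle (50) over the representation in (53a); the dictionary encoding is my own illustrative stand-in for typed feature structures with structure sharing.

```python
# Sketch of (50) over (53a)-style representations: the mother's HEAD
# value is token-identical to the head daughter's HEAD value.

def headed_phrase(head_dtr, non_head_dtrs):
    return {
        'HEAD': head_dtr['HEAD'],           # (50): shared with the mother
        'HEAD-DTR': head_dtr,
        'NON-HEAD-DTRS': list(non_head_dtrs),
    }

grammar = {'HEAD': 'noun'}
the = {'HEAD': 'det'}

np = headed_phrase(grammar, [the])
print(np['HEAD'])                            # 'noun'
print(np['HEAD'] is np['HEAD-DTR']['HEAD'])  # True: structure sharing
```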

The definition of Internal Merge in (43) corresponds to the Head-Filler Schema in HPSG (Pollard & Sag 1994: 164). Stabler's derivational rule deletes the subtree t<sub>2</sub>[−f]<sup>></sup>. HPSG is monotonic, that is, nothing is deleted in structures that are licensed by a grammar. Instead of deleting t<sub>2</sub> inside of a larger structure, structures containing an empty element (not a tree) are licensed directly.<sup>36</sup> Both in Stabler's definition and in the HPSG schema, t<sub>2</sub> is realized as the filler in the structure. In Stabler's definition of Internal Merge, the category of the head daughter is not mentioned, but Pollard & Sag (1994: 164) restrict the head daughter to be a finite verbal projection. Chomsky (2007: 17) assumes that all operations but External Merge operate on the phase level. Chomsky assumes that CP and v\*P are phases. If this constraint is incorporated into the definition in (43), the restrictions on the label of t<sub>1</sub> would have to be extended accordingly. In HPSG, sentences like (54) have been treated as VPs, not as CPs, and hence Pollard & Sag's requirement that the head daughter in the Head-Filler Schema be verbal corresponds to Chomsky's restriction.

(54) Bagels, I like.

Hence, despite minor presentational differences, we may conclude that the formalization of Internal Merge and that of the Head-Filler Schema are very similar.

An important difference between HPSG and Stabler's definition is that 'movement' is not feature-driven in HPSG. This is an important advantage since feature-driven movement cannot deal with instances of so-called altruistic movement (Fanselow 2003a), that is, movement of a constituent that happens in order to make room for another constituent in a certain position (see Section 4.6.1.4).

A further difference between general X theory and Stabler's formalization of Internal Merge on the one hand and HPSG on the other is that in the latter case there is no restriction regarding the completeness (or valence 'saturation') of the filler daughter. Whether the filler daughter has to be a maximal projection (English) or not (German) follows from restrictions that are enforced locally when the trace is combined with its head. This makes it possible to analyze sentences like (55) without remnant movement.<sup>37</sup>

<sup>36</sup>See Bouma, Malouf & Sag (2001) for a traceless analysis of extraction in HPSG and Müller (2023a: Chapter 7) and Chapter 19 of this book for a general discussion of empty elements.

(55) Gelesen hat das Buch keiner \_ \_.
     read has the book nobody
     'Nobody has read the book.'

In contrast, Stabler is forced to assume an analysis like the one in (56b) (see also G. Müller (1998) for a remnant movement analysis). In a first step, *das Buch* is moved out of the VP (56a) and in a second step, the emptied VP is fronted as in (56b).

(56) a. Hat [das Buch] [keiner [VP \_ gelesen]].
     b. [VP \_ Gelesen] hat [das Buch] [keiner \_ ].

Haider (1993: 281), De Kuthy & Meurers (2001: Section 2) and Fanselow (2002) showed that this kind of remnant movement analysis is problematic for German. The only phenomenon that Fanselow identified as requiring a remnant movement analysis is the problem of multiple fronting (see Müller (2003a) for an extensive discussion of relevant data). Müller (2005b,c, 2023a) develops an alternative analysis of these multiple frontings which uses an empty verbal head in the *Vorfeld*, but does not assume that adjuncts or arguments like *das Buch* in (56b) are extracted from the *Vorfeld* constituent. Instead of the remnant movement analysis, the mechanism of argument composition from Categorial Grammar (Geach 1970, Hinrichs & Nakazawa 1994a) is used to ensure the proper realization of arguments in the sentence. Chomsky (2007: 20) already uses argument composition as part of his analysis of TPs and CPs. Hence both remnant movement and argument composition are assumed in recent Minimalist proposals. The HPSG alternative, however, would appear to need less theoretical apparatus and hence has to be preferred for reasons of parsimony.
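The argument composition mechanism just mentioned can be sketched as follows; the `compose` function and the dictionary encoding are my own simplifications of the Categorial Grammar/HPSG proposals.

```python
# Sketch of argument composition (Geach 1970; Hinrichs & Nakazawa 1994a):
# an auxiliary selects the embedded verb and takes over that verb's
# still-unsaturated arguments.

def compose(aux, verb):
    """The auxiliary inherits the ARG-ST of the verb it selects."""
    return {'PHON': f"{verb['PHON']} {aux['PHON']}",
            'ARG-ST': list(verb['ARG-ST'])}

gelesen = {'PHON': 'gelesen', 'ARG-ST': ['NP[nom]', 'NP[acc]']}
hat = {'PHON': 'hat'}  # selects a verbal complement

print(compose(hat, gelesen))
# {'PHON': 'gelesen hat', 'ARG-ST': ['NP[nom]', 'NP[acc]']}
# The complex licenses 'das Buch' and 'keiner' directly, so no remnant
# movement of an emptied VP is needed for sentences like (55).
```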

Finally, it should be mentioned that all transformational accounts have problems with Across the Board extraction like (57a) and (57b) in which one element corresponds to several gaps.

(57) b. The man who [Mary loves \_ ] and [Sally hates \_ ] computed my tax.

This problem was solved for GPSG by Gazdar (1981b) and the solution carries over to HPSG. The Minimalist community tried to address these problems by introducing operations like sideward movement (Nunes 2004), where constituents can be inserted into sister trees. So in the example in (57a), *bagels* is copied from the object position of *hates* into the object position of *like* and then these two copies are related to the fronted element. Kobele criticized such solutions since they overgenerate massively and need complicated filters. What he suggests instead is the introduction of a GPSG-style slash mechanism into Minimalist theories (Kobele 2008).

<sup>37</sup>See also Müller & Ørsnes (2013b) for an analysis of object shift in Danish that can account for verb fronting without remnant movement. The analysis does not have any of the problems that remnant movement analyses have.

<sup>38</sup>Pollard & Sag (1994: 205).

Furthermore, movement paradoxes (Bresnan 2001: Chapter 2) can be avoided by not sharing all information between filler and gap, a solution that is not available for transformational accounts, which usually assume identity of filler and gap or – as under the Copy Theory of Movement – assume that a derivation contains multiple copies of one object only one of which is spelled out. See also Borsley (2012) for further puzzles for, and problems of, movement-based approaches.

A further difference between MG and HPSG is that the Head-Filler Schema is not the only schema for analyzing long-distance dependencies. As was noted in footnote 12 on page 148, there is dislocation to the right (extraposition) as well as fronting. Although these should certainly be analyzed as long-distance dependencies, they differ from other long-distance dependencies in various respects (see Section 13.1.5). For analyses of extraposition in the HPSG framework, see Keller (1995), Bouma (1996), and Müller (1999b).

Apart from the schema for long-distance dependencies, there are, of course, other schemata in HPSG which are not present in MG or Minimalism. These are schemata which describe constructions without heads or are necessary to capture the distributional properties of parts of constructions, which cannot be easily captured in lexical analyses (e.g., the distribution of *wh*- and relative pronouns). See Section 21.10.

Chomsky (2010) has compared a Merge-based analysis of auxiliary inversion to an HPSG analysis and criticized the fact that the HPSG analysis uses ten schemata rather than one (Merge). Ginzburg & Sag (2000) distinguish three types of constructions with moved auxiliaries: inverted sentences such as those with a fronted adverbial and those with *wh*-questions (58a,b), inverted exclamatives (58c) and polar interrogatives (58d):

(58) b. Whose book *are you reading*?
     c. Am I tired!
     d. Did Kim leave?

Fillmore (1999) captures various different usage contexts in his Construction Grammar analysis of auxiliary inversion and shows that there are semantic and pragmatic differences between the various contexts. Every theory must be able to account for these. Furthermore, one does not necessarily require ten schemata: it is possible to encode the relevant information – as Categorial Grammar does – in the lexical entry for the auxiliary or on an empty head (see Chapter 21 for a more general discussion of lexical and phrasal analyses). Regardless of this, every theory has to somehow account for these ten differences. If one wishes to argue that this has nothing to do with syntax, then somehow this has to be modeled in the semantic component. This means that there is no reason to prefer one theory over another at this point.

### **4.6.5 Selection of atomic features vs. selection of complex categories**

Berwick & Epstein (1995) pointed out that Minimalist theories are very similar to Categorial Grammar, and I have discussed the similarities between Minimalist theories and HPSG in Müller (2013c) and in the previous subsections. However, I overlooked one crucial difference between the usual assumptions about selection in Minimalist proposals on the one hand and Categorial Grammar, Dependency Grammar, LFG, HPSG, TAG, and Construction Grammar on the other hand: what is selected in the former type of theory is a single feature, while the latter theories select for feature bundles. This seems to be a small difference, but the consequences are rather severe. Stabler's definition of External Merge in (42) removes the selection feature (=f) and the corresponding feature of the selected element (f). In some publications and in the introduction of this book, the selection features are called uninterpretable features and are marked with a *u*. The uninterpretable features have to be checked and then they are removed from the linguistic object, as in Stabler's definition. The fact that they have been checked is represented by striking them out. It is said that all uninterpretable features have to be checked before a syntactic object is sent to the interfaces (semantics and pronunciation). If uninterpretable features are not checked, the derivation crashes. Adger (2003: Section 3.6) explicitly discusses the consequences of these assumptions: a selecting head checks a feature of the selected object. It is not possible to check features of elements that are contained in the object that a head combines with. Only features at the topmost node, the so-called root node, can be checked with External Merge. The only way features inside complex objects can be checked is by means of movement. This means that a head may not combine with a partially saturated linguistic object, that is, with a linguistic object that has an unchecked selection feature. I will discuss this design decision with reference to an example provided by Adger (2003: 95). The noun *letters* selects for a P and Ps select for an N. The analysis of (59a) is depicted on the left in Figure 4.23.

(59) a. letters to Peter
     b. \* letters to

Figure 4.23: The analysis of *letters to Peter* according to Adger (2003: 95)

The string in (59b) is ruled out since the uninterpretable N feature of the preposition *to* is not checked. This builds the constraint that all dependent elements have to be maximal into the core mechanism, and it makes it impossible to analyze examples like (60) in the most straightforward way, namely as involving a complex preposition and a noun that is lacking a determiner:

(60) vom Bus
     from.the bus

In theories in which complex descriptions can be used to describe dependents, the dependent may be partly saturated. So, for instance, in HPSG, fused prepositions like *vom* 'from.the' can select an N, which is a nominal projection lacking a specifier:

(61) N[spr ⟨ Det ⟩]

The description in (61) is an abbreviation for an internally structured set of feature-value pairs (see Section 9.1.1). The example is given here only to illustrate the differences, since there may be ways of accounting for such cases in a single-feature-Merge system. For instance, one could assume a DP analysis and have the complex preposition select a complete NP (something of category N with no uninterpretable features). Alternatively, one can assume that there is indeed a full PP with all the structure that is usually assumed and that the fusion of preposition and determiner happens during pronunciation. The first suggestion eliminates the option of assuming an NP analysis as suggested by Bruening (2009) in the Minimalist framework.
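The contrast between the two selection regimes can be sketched as follows; both functions and the feature encodings are hypothetical illustrations, not renderings of any particular implementation.

```python
# Sketch of the difference discussed above: single-feature checking
# requires the dependent's own selection features to be checked already,
# while a complex description can target a partially saturated dependent
# such as N[SPR <Det>] in (61).

def merge_atomic(sel_feature, dep):
    """Single-feature Merge: the category must match and all of the
    dependent's uninterpretable/selection features must be checked."""
    return sel_feature == dep['CAT'] and not dep.get('UNCHECKED', [])

def select_complex(description, dep):
    """HPSG-style selection: the head's description may also constrain
    the dependent's valence."""
    return all(dep.get(attr) == val for attr, val in description.items())

# A noun still needing its determiner carries an unchecked selection
# feature, so single-feature Merge crashes, as with (59b) 'letters to':
bus_mp = {'CAT': 'N', 'UNCHECKED': ['=D']}
print(merge_atomic('N', bus_mp))   # False: vom + Bus cannot be combined

# With complex descriptions, vom can select exactly such an object, (61):
bus_hpsg = {'CAT': 'N', 'SPR': ['Det']}
print(select_complex({'CAT': 'N', 'SPR': ['Det']}, bus_hpsg))  # True
```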

Apart from this illustrative example with a fused preposition, there are other cases in which one may want to combine unsaturated linguistic objects. I already discussed coordination examples above. Another example is the verbal complex in languages like German, Dutch, and Japanese. Of course there are analyses of these languages that do not assume a verbal complex (G. Müller 1998, Wurmbrand 2003a), but these are not without problems. Some of the problems were discussed in the previous section as well.

Summing up this brief subsection, it has to be said that the feature checking mechanism that is built into the conception of Merge is more restrictive than the selection that is used in Categorial Grammar, Lexical Functional Grammar, HPSG, Construction Grammar, and TAG. In my opinion, it is too restrictive.

### **4.6.6 Summary**

In sum, one can say that the computational mechanisms of the Minimalist Program (e.g., transderivational constraints and labeling) as well as the theory of feature-driven movement are problematic and the assumption of empty functional categories is sometimes ad hoc. If one does not wish to assume that these categories are shared by all languages, then proposing two mechanisms (Merge and Move) does not represent a simplification of grammar since every single functional category which must be stipulated constitutes a complication of the entire system.

The labeling mechanism is not yet worked out in detail, does not account for the phenomena it was claimed to provide accounts for, and hence should be replaced by the head/functor-based labeling that is used in Categorial Grammar and HPSG.

# **4.7 Summary and classification**

This section is similar to Section 3.6. I first comment on language acquisition and then on formalization.

### **4.7.1 Explaining language acquisition**

Chomsky (2008: 135) counts theories in the MP as Principles & Parameters analyses and identifies MP parameters as being located in the lexicon; see also Hornstein (2013: 396). UG is defined as possibly containing non-language-specific components, which are genetically determined (Chomsky 2007: 7). UG consists of unbounded Merge and the condition that expressions derived by a grammar must fulfill the restrictions imposed by the phonological and conceptual-intentional interfaces. In addition, a specific repertoire of features is assumed to be part of UG (Chomsky 2007: 6–7). The exact nature of these features has not been explained in detail and, as a result, the power of UG is somewhat vague. However, there is a fortunate convergence between various linguistic camps, as Chomsky does not assume that the swathes of functional projections which we encountered in Section 4.6.1 also form part of UG (however, authors like Cinque & Rizzi (2010) do assume that a hierarchy of functional projections is part of UG). Since there are still parameters, the arguments against GB approaches to language acquisition that were mentioned in Section 3.6.1 remain relevant for theories of language acquisition in the Minimalist Program. See Chapter 16 for an in-depth discussion of approaches to language acquisition and the Principles & Parameters model as well as input-based approaches.

Chomsky's main goal in the Minimalist Program is to simplify the theoretical assumptions regarding formal properties of language and the computational mechanisms involved to such an extent that it becomes plausible that they, or relevant parts of them, are part of our genetic endowment. But if we recapitulate what was assumed in this chapter, it is difficult to believe that Minimalist theories achieve this goal. To derive a simple sentence with an intransitive verb, one needs several empty heads and movements. Features can be strong or weak, and Agree operates nonlocally in trees across several phrase boundaries. And in order to make correct predictions, it has to be ensured that Agree can only see the closest possible element (see (13)–(14)). This is a huge machinery in comparison to a Categorial Grammar that just combines adjacent elements. Categorial Grammars can be acquired from input (see Section 13.8.3), while it is really hard to imagine how the fact that there are features that trigger movement when they are strong, but do not trigger it when they are weak, should be acquired from data alone.

### **4.7.2 Formalization**

Section 3.6.2 commented on the lack of formalization in transformational grammar up until the 1990s. The general attitude towards formalization did not change in the Minimalist era, and hence there are very few formalizations and implementations of Minimalist theories.

Stabler (2001) shows how it is possible to formalize and implement Kayne's theory of remnant movement. In Stabler's implementation,<sup>39</sup> there are no transderivational constraints and no numerations,<sup>40</sup> and he does not assume Agree (see Fong 2014: 132), etc. The following is also true of Stabler's implementation of Minimalist Grammars and GB systems: there are no large grammars. Stabler's grammars are small, meant as a proof of concept, and purely syntactic. There is no morphology,<sup>41</sup> no treatment of multiple agreement (Stabler 2011b: Section 27.4.3) and above all no semantics. PF and LF processes are not modeled.<sup>42</sup> The grammars and the computational system developed by Sandiway Fong are of similar size and faithfulness to the theory (Fong & Ginsburg 2012, Fong 2014): the grammar fragments are small, encode syntactic aspects such as labeling directly in the phrase structure (Fong & Ginsburg 2012: Section 4) and, therefore, fall behind X theory. Furthermore, they do not contain any morphology. Spell-Out is not implemented, so in the end it is not possible to parse or generate any utterances.<sup>43</sup> Herring's (2016) dissertation is a promising beginning. Herring developed a system that can be used for grammar development in the Minimalist Program. In the version described in his thesis, the system could generate but was unable to parse sentences (pp. 138, 143). PF phenomena were not modeled (pp. 142–143) and the two example fragments are small and come without a semantics (p. 143).

<sup>39</sup>His system is available at: http://linguistics.ucla.edu/people/stabler/coding.html. 2020-07-16.

<sup>40</sup>There is a numeration lexicon in Veenstra (1998: Chapter 9). This lexicon consists of a set of numerations, which contain functional heads that can be used in sentences of a certain kind. For example, Veenstra assumes numerations for sentences with bivalent verbs and subjects in initial position, for embedded sentences with monovalent verbs, for *wh*-questions with monovalent verbs, and for polar interrogatives with monovalent verbs. An element from this set of numerations corresponds to a particular configuration and a phrasal construction in the spirit of Construction Grammar. Veenstra's analysis is not a formalization of the concept of the numeration that one finds in Minimalist works. Normally, it is assumed that a numeration contains all the lexical entries which are needed for the derivation of a sentence. As (i) shows, complex sentences can consist of combinations of sentences of various different sentence types:

(i) Der Mann, der behauptet hat, dass Maria gelacht hat, steht neben der Palme, die im letzten Jahr gepflanzt wurde.
the man who claimed has that Maria laughed has stands next.to the palm.tree which in.the last year planted aux
'The man who claimed Maria laughed is standing next to the palm tree that was planted last year.'

In (i), there are two relative clauses with verbs of differing valence, an embedded sentence with a monovalent verb and the matrix clause. Under a traditional understanding of numerations, Veenstra would have to assume an infinite numeration lexicon containing all possible combinations of sentence types.

<sup>41</sup>The test sentences have the form as in (i).

	- b. the king have -s eat -en
	- c. the king be -s eat -ing
	- d. the king -s will -s have been eat -ing the pie

<sup>42</sup>See Sauerland & Elbourne (2002) for suggestions of PF and LF-movement and the deletion of parts of copies (p. 285). The implementation of this would be far from trivial.

<sup>43</sup>The claim by Berwick, Pietroski, Yankama & Chomsky (2011: 1221) in reference to Fong's work is just plain wrong: *But since we have sometimes adverted to computational considerations, as with the ability to "check" features of a head/label, this raises a legitimate concern about whether our framework is computationally realizable. So it is worth noting that the copy conception of movement, along with the locally oriented "search and labeling" procedure described above, can be implemented computationally as an efficient parser; see Fong, 2011, for details.* If one has a piece of software which cannot parse a single sentence, then one cannot claim that it is efficient, since one does not know whether the missing parts of the program could make it extremely inefficient. Furthermore, one cannot compare the software to other programs. As has already been discussed, labeling is not carried out by Fong as described in Chomsky's work; instead, he uses a phrase structure grammar of the kind described in Chapter 2.

The benchmark here has been set by implementations of grammars in constraint-based theories; for example, the HPSG grammars of German (Müller & Kasper 2000), English (Flickinger, Copestake & Sag 2000) and Japanese (Siegel 2000) that were developed in the 90s as part of Verb*mobil* (Wahlster 2000) for the analysis of spoken language, or the LFG and CCG systems with large coverage. These grammars can analyze up to 83 % of utterances in spoken language (for Verb*mobil* from the domains of appointment scheduling and trip planning) or written language. Linguistic knowledge is used to generate and analyze linguistic structures. In one direction, one arrives at a semantic representation of a string of words, and in the other, one can create a string of words from a given semantic representation. A morphological analysis is indispensable for analyzing naturally occurring data from languages with elaborated morphological marking systems. In the remainder of this book, the grammars and computational systems developed in other theories will be discussed at the beginning of the respective chapters.

The reason for the lack of larger fragments within GB/MP could have to do with the fact that the basic assumptions of the Minimalist community change relatively quickly:

In Minimalism, the triggering head is often called a *probe*, the moving element is called a *goal*, and there are various proposals about the relations among the features that trigger syntactic effects. Chomsky (1995b: p. 229) begins with the assumption that features represent requirements which are checked and deleted when the requirement is met. The first assumption is modified almost immediately so that only a proper subset of the features, namely the 'formal', 'uninterpretable' features are deleted by checking operations in a successful derivation (Collins 1997; Chomsky 1995b: §4.5). Another idea is that certain features, in particular the features of certain functional categories, may be initially unvalued, becoming valued by entering into appropriate structural configurations with other elements (Chomsky 2008; Hiraiwa 2005). And some recent work adopts the view that features are never deleted (Chomsky 2007: p. 11). These issues remain unsolved. (Stabler 2011a: 397)

In order to fully develop a grammar fragment, one needs at least three years (compare the time span between the publication of *Barriers* (1986) and Stabler's implementation (1992)). Particularly large grammars require the knowledge of several researchers working in international cooperation over the space of years or even decades. This process is disrupted if fundamental assumptions are repeatedly changed at short intervals.

As far as large-scale coverage is concerned, the more recent work by John Torr is an exception to what was said above.<sup>44</sup> Torr, Stanojevic, Steedman & Cohen (2019) state that their parser is the first one to take up the challenge that Sproat & Lappin (2005) issued to the Minimalist community. The work of the authors is impressive and they really implemented a wide-coverage, statistically trained parser based on Transformational Grammar, but what they did differs from standard Minimalism, since they assume "around 45" versions of Move and Merge (p. 2488) in comparison to the two versions usually assumed in Minimalism (Move and Merge, or Internal and External Merge).<sup>45</sup> Torr & Stabler (2016) explain some of the schemata that are assumed: there are versions of Merge that combine a head with a complement and versions that combine a head with a specifier (see Müller (2013c) and Section 4.6.4.2 above for a comparison of Minimalist Grammars with HPSG; the respective variants of Merge correspond to HPSG's Specifier-Head Schema and Head-Complement Schema). Torr & Stabler (2016: 4) assume four schemata for adjunction (HPSG has one such schema and uses underspecification with respect to order, see Müller (2021a: Section 2)). They assume a special rule for rightward movement (p. 5) corresponding to Keller's (1995) and Müller's (1999b) Head-Extra Schema for extraposition. In addition, the authors assume two schemata for head movement, where HPSG assumes a lexical rule or a unary branching schema applying to words or coordinations of words (Müller 2023a, 2021a: Section 5.1). Across the Board extraction (Ross 1967: Section 4.2.4.1) is taken care of by four special schemata; see Abeillé & Chaves (2021) for an overview of treatments of coordination in HPSG. This treatment of Across the Board Extraction is non-standard within Minimalism. For the analysis of examples like (62), in which one filler corresponds to two gaps in two conjuncts, the authors build on Kobele (2008), who uses a slash passing mechanism going back to Sag (1983) and Gazdar (1981b). While Kobele assumes the slash passing mechanism of GPSG, Torr & Stabler (2016) suggest an analysis of (62) with two instances of *who* in object positions, which are later unified into one when the second conjunct is merged into the main structure.

<sup>44</sup>It is not an exception as far as theory development is concerned. Torr's system is based on Chomsky (1995b), so he did not follow new trends but stayed within a certain setting.

(62) Who did Jack say Mary likes \_ and Pete hates \_ ?

An interesting property of the analysis is that *who Pete hates* forms a discontinuous constituent: *who* is combined with *hates* despite its sentence-initial position. Information about this *wh*-element is passed up the tree in a GPSG-style way. The difference is that there is no trace; rather, the extracted element is identical in phonological material with the filler. Interestingly, there is an HPSG variant of nonlocal dependencies that is very similar to what Torr & Stabler (2016) suggest, and together with a modified Filler-Head Schema the analyses are parallel: Hinrichs & Nakazawa (1994b) suggested that the linguistic objects involved in nonlocal dependencies are of type *sign* rather than *local*. This makes it possible to pass up information about a daughter, including its phonological make-up. If one assumes a version of HPSG permitting discontinuous constituents (Reape 1994, Kathol 2001, Müller 1995, 2004d and Section 11.7.2.2 of this book) and a Filler-Head Schema that requires that the phonology of the filler is identical to the phonology in the slash list and that does not insert the fronted element into the constituent order domain of the head (since it is in there already), we get an analysis of the type described in Torr & Stabler (2016). Figure 4.24 shows the analysis that was suggested by Torr & Stabler (2016) and Figure 4.25 the HPSG analog.

<sup>45</sup>Torr explained in p.c. 2019 that these 45 rules can be folded into two Merge functions and two Move functions. But in the end this is just a clever way of hiding complexity. It is like Chomsky (2005: 12) revising the theory with Move and Merge into one with just one operation Merge but assuming two subcases, Internal and External Merge.

Figure 4.24: Derivation tree of *who Jack likes* in Directional Minimalist Grammar according to Torr & Stabler (2016)

Directional Minimalist Grammars use the '=' sign to indicate the direction in which an argument is required: =d means that a DP is required to the left of a head and d= encodes the requirement of a DP to the right. This is like the '/' notation of Categorial Grammar (see Chapter 8). *likes* has the category d= =d v, which means that it is a verb requiring a d to its right (the object) and a d to its left (the subject). *who* is of category d and has a −wh feature, something that has to be checked for a derivation to be complete. *Jack* is the subject of *likes* and fulfills the =d requirement of *likes*. Items like [pres] and [int] are empty elements. [pres] has a +case feature and can make *Jack* move to its specifier. The movement consumes the −case feature and puts *Jack* at the front of the string. This looks like a unary projection in the derivation tree. The empty interrogative head [int] selects for a t to its right. The result is a C projection that has a +wh feature. In the final step, *who*, which is −wh, moves to the left and the wh features are removed. The important thing is that the information about the phonology of *who* and its wh feature is percolated up the tree until it is finally bound off in the last derivation step.
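
To make the feature-checking notation concrete, here is a minimal sketch in Python; it is my own illustration, not Torr's code, and it covers only the directional selection features of the *likes* example, ignoring movement, the −wh and −case features, and the empty heads [pres] and [int]. All names are hypothetical.

```python
# A toy version of directional Merge in a Directional Minimalist Grammar:
# 'd=' selects a d to the right, '=d' selects a d to the left.
from dataclasses import dataclass

@dataclass
class Item:
    phon: list   # phonological string as a list of words
    feats: list  # remaining features, checked left to right

likes = Item(["likes"], ["d=", "=d", "v"])  # object to the right, subject to the left
jack = Item(["Jack"], ["d"])
mary = Item(["Mary"], ["d"])

def merge(head: Item, arg: Item) -> Item:
    """Check the head's first selection feature against the argument's
    category and concatenate the strings in the required direction."""
    sel, rest = head.feats[0], head.feats[1:]
    if sel == arg.feats[0] + "=":   # 'd=': argument to the right
        return Item(head.phon + arg.phon, rest)
    if sel == "=" + arg.feats[0]:   # '=d': argument to the left
        return Item(arg.phon + head.phon, rest)
    raise ValueError(f"cannot check {sel} against {arg.feats[0]}")

vp = merge(merge(likes, mary), jack)
print(vp.phon, vp.feats)  # ['Jack', 'likes', 'Mary'] ['v']
```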

Figure 4.25 shows the HPSG analog. The information about the local properties of the *wh* word, including its phonology, is passed up the tree until it is bound off in a filler-head configuration. The Filler-Head Schema binds off the nonlocal dependency and makes sure that the phonology of the filler is not realized twice (see Reape 1994, Müller 2021a: Section 6 on linearization domains and Abeillé & Chaves 2021: Section 7 on multi-dominance approaches in HPSG). An alternative to a binary branching Filler-Head Schema would be a unary branching rule that binds off the element in slash and adds the stored phonology to the phonology of the daughter. This would then be completely parallel to the unary branching assumed in Torr's Directional Minimalist Grammar.

Concluding the discussion of Torr's work, it can be said that it is truly impressive but that it shows a convergence between Minimalism (or rather Minimalist Grammar) and HPSG. Tools from GPSG/HPSG were adopted and the outcome differs in crucial aspects from what is taught in Minimalist textbooks (just one or two instances of Merge vs. 45, transformations vs. GPSG-style percolation of features).

### **Further reading**

This chapter heavily draws on Adger (2003). Other textbooks on Minimalism are Radford (1997), Grewendorf (2002), and Hornstein, Nunes & Grohmann (2005).

Kuhn (2007) offers a comparison of modern derivational analyses with constraint-based LFG and HPSG approaches. Borsley (2012) contrasts analyses of long-distance dependencies in HPSG with movement-based analyses as in GB/ Minimalism. Borsley discusses four types of data which are problematic for movement-based approaches: extraction without fillers, extraction with multiple gaps, extractions where fillers and gaps do not match and extraction without gaps. Borsley & Müller (2021) is another comparison of Minimalism and HPSG. The authors discuss differences of approach and outlook of the two frameworks (formalization and exhaustivity), empirical quality of the work, differences in assumed syntactic structures, psycholinguistic issues and the assumptions made in the frameworks regarding language acquisition.

The discussion of labeling, the abandonment of X theory, and a comparison between Stabler's Minimalist Grammars and HPSG from Sections 4.6.2–4.6.4 can be found in Müller (2013c).

*Intonational Phrasing, Discontinuity, and the Scope of Negation* by Błaszczak & Gärtner (2005) is recommended for the more advanced reader. The authors compare analyses of negated quantifiers with wide scope in the framework of Minimalism (following Kayne) as well as Categorial Grammar (following Steedman).

Sternefeld (2006) is a good, detailed introduction to syntax (839 pages) which develops a Transformational Grammar analysis of German that (modulo transformations) almost matches what is assumed in HPSG (feature descriptions for arguments ordered in a valence list according to a hierarchy). Sternefeld's structures are minimal since he does not assume any functional projections that cannot be motivated for the language under discussion. Sternefeld is critical of certain aspects which some other analyses take for granted. He views his book explicitly as a textbook from which one can learn how to argue coherently when creating theories. For this reason, the book is recommended not just for students and PhD students.

Sternefeld & Richter (2012) discuss the situation in theoretical linguistics with particular focus on the theories described in this and the previous chapter. I can certainly understand the frustration of the authors with regard to the vagueness of analyses, argumentation style, the empirical base of research, rhetorical clichés, immunization attempts and the general respect for scientific standards: a recent example of this is the article *Problems of Projection* by Chomsky (2013).*<sup>a</sup>* I, however, do not share the general, pessimistic tone of this article. In my opinion, the patient's condition is critical, but he is not dead yet. As a reviewer of the Sternefeld and Richter paper pointed out, the situation in linguistics has changed so much that having a dissertation from MIT does not necessarily guarantee you a position later on (footnote 16). One could view a reorientation of certain scientists with regard to certain empirical questions, adequate handling of data (Fanselow 2004b; 2009: 137) and improved communication between theoretical camps as a way out of this crisis.

Since the 90s, it has been possible to identify an increased empirical focus (especially in Germany), which manifests itself, for example, in the work of linguistic Collaborative Research Centers (SFBs) or the yearly *Linguistic Evidence* conference. As noted by the reviewer cited above, in the future it will not be enough to focus on Chomsky's problems in determining the syntactic categories in sentences such as *He left* (see Section 4.6.2). Linguistic dissertations will have to have an empirical section, which shows that the author actually understands something about language. Furthermore, dissertations, and of course other publications, should give an indication that the author has not just considered theories from a particular framework but is also aware of the broad range of relevant descriptive and theoretical literature.

As I have shown in Section 4.6.4 and in Müller (2013c) and will also show in the following chapters and the discussion chapters in particular, there are most certainly similarities between the various analyses on the market and they do converge in certain respects. The way of getting out of the current crisis lies with the empirically-grounded and theoretically broad education and training of following generations.

In short, both teachers and students should read the medical record by Sternefeld and Richter. I implore the students not to abandon their studies straight after reading it, but rather to postpone this decision at least until after they have read the remaining chapters of this book.

*<sup>a</sup>*Vagueness: in this article, *perhaps* occurs 19 times, *may* 17 times, as well as various *if*s. Consistency: the assumptions made are inconsistent. See footnote 18 on page 156 of this book. Argumentation style: the term specifier is abolished and it is claimed that the problems associated with this term can no longer be formulated. Therefore, they are now not of this world. See footnote 28 on page 161 of this book. Immunization: Chomsky writes the following regarding the Empty Category Principle: *apparent exceptions do not call for abandoning the generalization as far as it reaches, but for seeking deeper reasons to explain where and why it holds* (p. 9). This claim is most certainly correct, but one wonders how much evidence one needs in a specific case in order to disregard a given analysis. In particular regarding the essay *Problems of Projection*, one has to wonder why this essay was published only five years after *On phases*. The evidence against the original approach is overwhelming and several points are taken up by Chomsky (2013) himself. If Chomsky were to apply his own standards (for a quote of his from 1957, see page 6) as well as general scientific methods (Occam's Razor), the consequence would surely be a return to head-based analyses of labeling.

For detailed comments on this essay, see Sections 4.6.2 and 4.6.3.

# **5 Generalized Phrase Structure Grammar**

Generalized Phrase Structure Grammar (GPSG) was developed as an answer to Transformational Grammar at the end of the 1970s. The book by Gazdar, Klein, Pullum & Sag (1985) is the main publication in this framework. Hans Uszkoreit (1987) developed a largish GPSG fragment for German. Analyses in GPSG were so precise that it was possible to use them as the basis for computational implementations. The following is a possibly incomplete list of languages with implemented GPSG fragments:


As was discussed in Section 3.1.1, Chomsky (1957) argued that simple phrase structure grammars are not well-suited to describe relations between linguistic structures and claimed that one needs transformations to explain them. These assumptions remained unchallenged for two decades (with the exception of publications by Harman (1963) and Freidin (1975)) until alternative theories such as LFG and GPSG emerged, which addressed Chomsky's criticisms and developed non-transformational explanations of phenomena for which there were previously only transformational analyses or none at all. Local reordering of arguments, the passive, and long-distance dependencies are some of the most important phenomena that have been discussed in this framework. Following some introductory remarks on the representational format of GPSG in Section 5.1, I will present the GPSG analyses of these phenomena in some more detail.

# **5.1 General remarks on the representational format**

This section has five parts. The general assumptions regarding features and the representation of complex categories are explained in Section 5.1.1, and the assumptions regarding the linearization of daughters in a phrase structure rule are explained in Section 5.1.2. Section 5.1.3 introduces metarules, Section 5.1.4 deals with semantics, and Section 5.1.5 with adjuncts.

## **5.1.1 Complex categories, the Head Feature Convention, and X rules**

In Section 2.2, we augmented our phrase structure grammars with features. GPSG goes one step further and describes categories as sets of feature-value pairs. The category in (1a) can be represented as in (1b):

	- a. NP(3,sg,nom)
	- b. { cat n, bar 2, per 3, num sg, case nom }

It is clear that (1b) corresponds to (1a). The two differ in that, in (1a), the information about part of speech and the X level is prominent (in the symbol NP), whereas in (1b) this information is treated just like the information about case, number or person.
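
To make this concrete, the following is a minimal sketch (my own illustration, not code from the GPSG literature) of categories as feature-value maps. One advantage of the set-based view is that partial descriptions come for free: a description mentioning only some features matches any category carrying those values.

```python
# The category in (1b) as a map from features to values.
np_3sg_nom = {"cat": "n", "bar": 2, "per": 3, "num": "sg", "case": "nom"}

def matches(description: dict, category: dict) -> bool:
    """A partial description matches a category if every feature-value
    pair of the description is present in the category."""
    return all(category.get(f) == v for f, v in description.items())

# {cat n, bar 2} matches any NP, regardless of person, number, or case:
print(matches({"cat": "n", "bar": 2}, np_3sg_nom))       # True
print(matches({"cat": "n", "case": "acc"}, np_3sg_nom))  # False
```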

Lexical entries have a feature subcat. The value is a number which says something about the kind of grammatical rules in which the word can be used. (2) shows examples of grammatical rules and lists some verbs which can occur in these rules.<sup>1</sup>

(2) V2 → H[5] (einzuschlafen, …)
V2 → H[6], N2[case acc] (aufzuessen, …)
These rules license VPs, that is, the combination of a verb with its complements, but not with its subject. The numbers following the category symbols (V or N) indicate the X projection level. For Uszkoreit, the maximum projection level for verbal projections is three rather than two, as is often assumed.

The H on the right side of the rule stands for *head*. The *Head Feature Convention* (HFC) ensures that certain features of the mother node are also present on the node marked with H (for details see Gazdar, Klein, Pullum & Sag 1985: Section 5.4 and Uszkoreit 1987: 67):

### **Principle 1 (Head Feature Convention)**

The mother node and the head daughter must bear the same head features unless indicated otherwise.

In (2), examples for verbs which can be used in the rules are given in brackets. As with ordinary phrase structure grammars, one also requires corresponding lexical entries for verbs in GPSG. Two examples are provided in (3):

(3) V[5, vform *inf*] → einzuschlafen
V[6, vform *inf*] → aufzuessen

The first rule states that *einzuschlafen* 'to fall asleep' has a subcat value of 5 and the second indicates that *aufzuessen* 'to finish eating' has a subcat value of 6. It follows, then, that *einzuschlafen* can only be used in the first rule in (2) and *aufzuessen* only in the second.

<sup>1</sup> The analyses discussed in the following are taken from Uszkoreit (1987).

Furthermore, (3) contains information about the form of the verb (*inf* stands for infinitives with *zu* 'to').

If we analyze the sentence in (4) with the second rule in (2) and the second rule in (3), then we arrive at the structure in Figure 5.1.

(4) Karl hat versucht, [den Kuchen aufzuessen].
Karl has tried the cake to.eat.up
'Karl tried to finish eating the cake.'

Figure 5.1: Projection of head features in GPSG

The rules in (2) say nothing about the order of the daughters which is why the verb (H[6]) can also be in final position. This aspect will be discussed in more detail in Section 5.1.2. With regard to the HFC, it is important to bear in mind that information about the infinitive verb form is also present on the mother node. Unlike simple phrase structure rules such as those discussed in Chapter 2, this follows automatically from the Head Feature Convention in GPSG. In (3), the value of vform is given and the HFC ensures that the corresponding information is represented on the mother node when the rules in (2) are applied. For the phrase in (4), we arrive at the category V2[vform *inf* ] and this ensures that this phrase only occurs in the contexts it is supposed to:

	- a. [Den Kuchen aufzuessen] hat er nicht gewagt.
	  the cake to.eat.up has he not dared
	  'He did not dare to finish eating the cake.'
	- b. \* [Den Kuchen aufzuessen] darf er nicht.
	  the cake to.eat.up is.allowed.to he not
	  Intended: 'He is not allowed to finish eating the cake.'
	- c. \* [Den Kuchen aufessen] hat er nicht gewagt.
	  the cake eat.up has he not dared
	  Intended: 'He did not dare to finish eating the cake.'
	- d. [Den Kuchen aufessen] darf er nicht.
	  the cake eat.up is.allowed.to he not
	  'He is not allowed to finish eating the cake.'

*gewagt* 'dared' selects for a verb or verb phrase with an infinitive with *zu* 'to' but not a bare infinitive, while *darf* 'is allowed to' takes a bare infinitive.
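
The projection of head features just illustrated can be sketched as follows; the particular set of head features and the encoding of nodes as maps are assumptions made for illustration only, not the formalization of Gazdar, Klein, Pullum & Sag (1985).

```python
# A toy Head Feature Convention: the mother inherits the head features of
# the head daughter unless a value is already specified.
HEAD_FEATURES = {"cat", "vform", "case", "per", "num"}

def project(mother: dict, head_daughter: dict) -> dict:
    result = dict(mother)
    for f in HEAD_FEATURES & head_daughter.keys():
        result.setdefault(f, head_daughter[f])  # copy unless fixed otherwise
    return result

# V[6, vform inf] as head daughter of a V2 projection, cf. Figure 5.1:
head = {"cat": "v", "bar": 0, "subcat": 6, "vform": "inf"}
print(project({"cat": "v", "bar": 2}, head))
# {'cat': 'v', 'bar': 2, 'vform': 'inf'} -- bar and subcat do not project
```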

This works in an analogous way for noun phrases: there are rules for nouns which do not take an argument as well as for nouns with certain arguments. Examples of rules for nouns which either require no argument or two PPs are given in (6) (Gazdar, Klein, Pullum & Sag 1985: 127):


The rule for the combination of N and a determiner is as follows:

(7) N2 → Det, H1

N2 stands for NP, that is, for a projection of a noun phrase at bar level two, whereas H1 stands for a projection of the head daughter at bar level one. The Head Feature Convention ensures that the head daughter is also a nominal projection, since all features on the head daughter apart from the X level are identified with those of the whole NP. When analyzing (8), the second rule in (6) licenses the N1 *Gesprächs mit Maria über Klaus*. The fact that *Gesprächs* 'conversation' is in the genitive is represented in the lexical item of *Gesprächs* and, since *Gesprächs* is the head, it is also present at N1, following the Head Feature Convention.

(8) des Gespräch-s mit Maria über Klaus
the.gen conversation-gen with Maria about Klaus
'the conversation with Maria about Klaus'

For the combination of N1 with the determiner, we apply the rule in (7). The category of the head determines the word class of the element on the left-hand side of the rule, which is why the rule in (7) corresponds to the classical X rules that we encountered in (65c) on page 74. Since *Gesprächs mit Maria über Klaus* is the head daughter, the information about the genitive on N1 is also present at the NP node.

## **5.1.2 Local reordering**

The first phenomenon to be discussed is local reordering of arguments. As was already discussed in Section 3.5, arguments in the middle field can occur in an almost arbitrary order. (9) gives some examples:

	- a. [weil] der Mann dem Kind das Buch gibt
	  because the man the child the book gives
	  'because the man gives the child the book'
	- b. [weil] der Mann das Buch dem Kind gibt
	  because the man the book the child gives
	- c. [weil] das Buch der Mann dem Kind gibt
	  because the book the man the child gives

In the phrase structure grammars in Chapter 2, we used features to ensure that verbs occur with the correct number of arguments. The following rule in (10) was used for the sentence in (9a):

(10) S → NP[nom] NP[dat] NP[acc] V\_nom\_dat\_acc

If one wishes to analyze the other orders in (9), then one requires an additional five rules, that is, six in total:

(11) S → NP[nom] NP[dat] NP[acc] V\_nom\_dat\_acc
S → NP[nom] NP[acc] NP[dat] V\_nom\_dat\_acc
S → NP[acc] NP[nom] NP[dat] V\_nom\_dat\_acc
S → NP[acc] NP[dat] NP[nom] V\_nom\_dat\_acc
S → NP[dat] NP[nom] NP[acc] V\_nom\_dat\_acc
S → NP[dat] NP[acc] NP[nom] V\_nom\_dat\_acc

In addition, it is necessary to postulate another six rules for the orders with the verb in initial position:

(12) S → V\_nom\_dat\_acc NP[nom] NP[dat] NP[acc]
S → V\_nom\_dat\_acc NP[nom] NP[acc] NP[dat]
S → V\_nom\_dat\_acc NP[acc] NP[nom] NP[dat]
S → V\_nom\_dat\_acc NP[acc] NP[dat] NP[nom]
S → V\_nom\_dat\_acc NP[dat] NP[nom] NP[acc]
S → V\_nom\_dat\_acc NP[dat] NP[acc] NP[nom]

Furthermore, one would also need parallel rules for transitive and intransitive verbs with all possible valences. Obviously, the commonalities of these rules and the generalizations regarding them are not captured. The point is that we have the same number of arguments, they can be realized in any order, and the verb can be placed in initial or final position. As linguists, we find it desirable to capture this property of the German language and represent it beyond phrase structure rules. In Transformational Grammar, the relationship between the orders is captured by means of movement: the deep structure corresponds to verb-final order with a certain order of arguments, and the surface order is derived by means of Move-α. Since GPSG is a non-transformational theory, this kind of explanation is not possible. Instead, GPSG imposes restrictions on *immediate dominance* (ID), which differ from those which refer to *linear precedence* (LP): rules such as (13) are to be understood as dominance rules, which do not have anything to say about the order of the daughters (Pullum 1982).

(13) S → V, NP[nom], NP[dat], NP[acc]

The rule in (13) simply states that S dominates all other nodes. Due to the abandonment of ordering restrictions for the right-hand side of the rule, we only need one rule rather than twelve.

Nevertheless, without any kind of restrictions on the right-hand side of the rule, there would be far too much freedom. For example, the following order would be permissible:

(14) \* Dem Kind der Mann gibt ein Buch.
the.dat child the.nom man gives a.acc book

Such orders are ruled out by so-called *Linear Precedence Rules* or LP-rules. LP-constraints are restrictions on local trees, that is, trees with a depth of one. It is, for example, possible to state something about the order of V, NP[nom], NP[acc] and NP[dat] in Figure 5.2 using linearization rules.

Figure 5.2: Example of a local tree

The following linearization rules serve to exclude orders such as those in (14):

(15) V[+mc] < X
X < V[−mc]

mc stands for *main clause*. The LP-rules ensure that in main clauses (+mc) the verb precedes all other constituents, while in subordinate clauses (−mc) it follows them. In addition, there is a restriction that all verbs with the mc value '+' also have to have the fin value '+'. This rules out infinitive forms in initial position.

These LP rules do not permit orders with an occupied prefield or postfield in a local tree. This is intended. We will see how fronting can be accounted for in Section 5.4.
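
The division of labor between ID and LP rules can be made concrete with a small sketch (my own illustration, with categories simplified to strings): a single ID rule supplies the set of daughters, and LP constraints filter the admissible orders.

```python
from itertools import permutations

# One ID rule: S immediately dominates these daughters, in no particular order.
DAUGHTERS = {"V[+mc]", "NP[nom]", "NP[dat]", "NP[acc]"}

def lp_ok(order) -> bool:
    """V[+mc] < X: in a main clause the verb precedes all other constituents."""
    return order[0] == "V[+mc]"

admissible = [o for o in permutations(DAUGHTERS) if lp_ok(o)]
print(len(admissible))  # 6: the verb-initial orders of (12), from a single rule
```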

### **5.1.3 Metarules**

We have previously encountered linearization rules for sentences containing subjects; however, our rules for verb phrases have the form in (16), that is, they do not include subjects:

(16) V2 → H[7], N2[case dat]
V2 → H[8], N2[case dat], N2[case acc]

These rules can be used to analyze the verb phrases *dem Mann das Buch zu geben* 'to give the man the book' and *das Buch dem Mann zu geben* 'to give the book to the man' as they appear in (17), but we cannot analyze sentences like (9), since the subject does not occur on the right-hand side of the rules in (16).

	- a. Er verspricht, [dem Mann das Buch zu geben].
	  he promises the.dat man the.acc book to give
	  'He promises to give the man the book.'
	- b. Er verspricht, [das Buch dem Mann zu geben].
	  he promises the.acc book the.dat man to give
	  'He promises to give the book to the man.'

A rule with the format of (18) does not make much sense for a GPSG analysis of German since it cannot derive all the orders in (9) as the subject can occur between the elements of the VP as in (9c).

(18) S → N2 V2

With the rule in (18), it is possible to analyze (9a) as in Figure 5.3, and it would also be possible to analyze (9b) with a different ordering of the NPs inside the VP. The remaining examples in (9) cannot be captured by the rule in (18), however.

Figure 5.3: VP analysis for German (not appropriate in the GPSG framework)

This has to do with the fact that only elements in the same local tree, that is, elements which occur on the right-hand side of a rule, can be reordered. While we can reorder the parts of the VP and thereby derive (9b), it is not possible to place the subject at a lower position between the objects. Instead, a metarule can be used to analyze sentences where the subject occurs between other arguments of the verb. This rule relates phrase structure rules to other phrase structure rules. A metarule can be understood as a kind of instruction that creates another rule for each rule of a certain form, and these newly created rules in turn license local trees.

For the example at hand, we can formulate a metarule which says the following: if there is a rule with the form "V2 consists of something" in the grammar, then there also has to be another rule "V3 consists of whatever V2 consists + an NP in the nominative". In formal terms, this looks as follows:

(19) V2 → W ↦
V3 → W, N2[case nom]


W is a variable which stands for an arbitrary number of categories (W = *whatever*). The metarule creates the following rules in (20) from the rules in (16):

(20) V3 → H[7], N2[case dat], N2[case nom]
V3 → H[8], N2[case dat], N2[case acc], N2[case nom]

Now, the subject and other arguments both occur in the right-hand side of the rule and can therefore be freely ordered as long as no LP rules are violated.
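
The rule-to-rule character of the metarule in (19) can be sketched as follows; the encoding of rules as pairs of a left-hand side and a set of daughters is an assumption for illustration only.

```python
# The subject introduction metarule: for every V2 rule, license a V3 rule
# with an additional nominative N2 among the daughters.
v2_rules = [
    ("V2", frozenset({"H[7]", "N2[dat]"})),
    ("V2", frozenset({"H[8]", "N2[dat]", "N2[acc]"})),
]

def subject_metarule(rule):
    lhs, daughters = rule
    if lhs == "V2":
        return ("V3", daughters | {"N2[nom]"})
    return None

v3_rules = [r for r in map(subject_metarule, v2_rules) if r]
print(v3_rules)  # the two rules in (20); ordering is left to the LP rules
```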

## **5.1.4 Semantics**

The semantics adopted by Gazdar, Klein, Pullum & Sag (1985: Chapters 9–10) goes back to Richard Montague (1974). Unlike a semantic theory which stipulates the combinatorial possibilities for each rule (see Section 2.3), GPSG uses more general rules. This is possible due to the fact that the expressions to be combined each have a semantic type. It is customary to distinguish between entities (*e*) and truth values (*t*). Entities refer to an object in the world (or in a possible world), whereas entire sentences are either true or false, that is, they have a truth value. It is possible to create more complex types from the types *e* and *t*. Generally, the following holds: if *a* and *b* are types, then ⟨ *a*, *b* ⟩ is also a type. Examples of complex types are ⟨ *e*, *t* ⟩ and ⟨ *e*, ⟨ *e*, *t* ⟩⟩. We can define the following combinatorial rule for typed expressions of this kind:

(21) If α is of type ⟨ *b*, *a* ⟩ and β is of type *b*, then α(β) is of type *a*.

This type of combination is also called *functional application*. Given the rule in (21), an expression of the type ⟨ *e*, ⟨ *e*, *t* ⟩⟩ is one which still has to be combined with two expressions of type *e* in order to yield an expression of type *t*. The first combination step with an *e* will yield ⟨ *e*, *t* ⟩ and the second step of combination with a further *e* will give us *t*. This is similar to what we saw with λ-expressions on page 62: *like*′(x, y) has to combine with a y and an x. The result in this example was *like*′(*max*′, *lotte*′), that is, an expression that is either true or false in the relevant world.
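
The combinatorial rule in (21) can be sketched as follows, with types encoded as nested pairs and meanings as functions; the representation of *like*′ is an illustrative stand-in, not Montague's fragment.

```python
# Functional application over typed expressions: if fn is of type <b, a>
# and arg is of type b, then fn(arg) is of type a.
def apply_fn(fn, fn_type, arg, arg_type):
    b, a = fn_type
    assert arg_type == b, "type clash"
    return fn(arg), a

# like' of type <e, <e, t>>: combine with two entities to get a truth value.
like = lambda y: (lambda x: ("like'", x, y))

vp, vp_type = apply_fn(like, ("e", ("e", "t")), "lotte'", "e")  # likes Lotte
s, s_type = apply_fn(vp, vp_type, "max'", "e")                  # Max likes Lotte
print(s, s_type)  # ("like'", "max'", "lotte'") t
```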

In Gazdar et al. (1985), an additional type is assumed for worlds in which an expression is true or false. For reasons of simplicity, I will omit this here. The types that we need for sentences, NPs and N′ s, determiners and VPs are given in (22):

	- a. TYP(S) = *t*
	- b. TYP(NP) = ⟨ ⟨ *e*, *t* ⟩, *t* ⟩
	- c. TYP(N′) = ⟨ *e*, *t* ⟩
	- d. TYP(Det) = ⟨ TYP(N′), TYP(NP) ⟩
	- e. TYP(VP) = ⟨ *e*, *t* ⟩

A sentence is of type *t* since it is either true or false. A VP needs an expression of type *e* to yield a sentence of type *t*. The type of the NP may seem strange at first glance, however, it is possible to understand it if one considers the meaning of NPs with quantifiers. For sentences such as (23a), a representation such as (23b) is normally assumed:

	- a. All children are laughing.
	- b. ∀x(*child*′(x) → *laugh*′(x))

The symbol ∀ stands for the universal quantifier. The formula can be read as follows: for every object for which it is the case that it has the property of being a child, it is also the case that it is laughing. If we consider the contribution made by the NP, then we see that the universal quantifier, the restriction to children, and the logical implication come from the NP:

(24) ∀x(*child*′(x) → P(x))

This means that an NP is something that must be combined with an expression which has exactly one open slot corresponding to the x in (24). This is formulated in (22b): an NP corresponds to a semantic expression which needs something of type ⟨ *e*, *t* ⟩ to form an expression which is either true or false (that is, of type *t*).

An N′ stands for a nominal expression of the kind λx *child*′(x). This means that if there is a specific individual which one can insert in place of the x, we arrive at an expression that is either true or false. For a given situation, it is either the case that John has the property of being a child or he does not. An N′ has the same type as a VP.

TYP(N′ ) and TYP(NP) in (22d) stand for the types given in (22c) and (22b), that is, a determiner is semantically something which has to be combined with the meaning of N′ to give the meaning of an NP.

Gazdar, Klein, Pullum & Sag (1985: 209) point out a redundancy in the semantic specification of grammars which follow the rule-to-rule hypothesis (see Section 2.3) since, instead of giving rule-by-rule instructions with regard to combinations, it suffices in many cases simply to say that the functor is applied to the argument. If we use types such as those in (22), it is also clear which constituent is the functor and which is the argument. In this way, a noun cannot be applied to a determiner, but rather only the reverse is possible. The combination in (25a) yields a well-formed result, whereas (25b) is ruled out.

(25) a. Det′ (N′ ) b. N′ (Det′ )

The general combinatorial principle is then as follows:

(26) Use functional application for the combination of the semantic contribution of the daughters to yield a well-formed expression corresponding to the type of the mother node.

The authors of the GPSG book assume that this principle can be applied to the vast majority of GPSG rules so that only a few special cases have to be dealt with by explicit rules.

### **5.1.5 Adjuncts**

For nominal structures in English, Gazdar et al. (1985: 126) assume the X analysis and, as we have seen in Section 2.4.1, this analysis is applicable to nominal structures in German. Nevertheless, there is a problem regarding the treatment of adjuncts in the verbal domain if one assumes flat branching structures, since adjuncts can freely occur between arguments:

	- a. [weil] der Mann dem Kind das Buch *gestern* gab
	  because the man the child the book yesterday gave
	  'because the man gave the child the book yesterday'
	- b. [weil] der Mann dem Kind *gestern* das Buch gab
	  because the man the child yesterday the book gave
	- c. [weil] der Mann *gestern* dem Kind das Buch gab
	  because the man yesterday the child the book gave
	- d. [weil] *gestern* der Mann dem Kind das Buch gab
	  because yesterday the man the child the book gave

For (27), one requires the following rule:

(28) V3 → H[8], N2[case dat], N2[case acc], N2[case nom], AdvP

Of course, adjuncts can also occur between the arguments of verbs from other valence classes:

(29) [weil] (oft) die Frau (oft) dem Mann (oft) hilft
because often the woman often the man often helps
'because the woman often helps the man'

Furthermore, adjuncts can occur between the arguments of a VP:

(30) Der Mann hat versucht, dem Kind heimlich das Buch zu geben.
the man has tried the child secretly the book to give
'The man tried to secretly give the book to the child.'

In order to analyze these sentences, we can use a metarule which adds an adjunct to the right-hand side of a V2 (Uszkoreit 1987: 146).

(31) V2 → W ↦
V2 → W, AdvP

By means of the subject introducing metarule in (19), the V3-rule in (28) is derived from a V2-rule. Since there can be several adjuncts in one sentence, a metarule such as (31) must be allowed to apply multiple times. The recursive application of metarules is often ruled out in the literature for reasons of generative capacity (see Chapter 17) (Thompson 1982; Uszkoreit 1987: 146). If one uses the Kleene star, then it is possible to formulate the adjunct metarule in such a way that it does not have to apply recursively (Uszkoreit 1987: 146):

(32) V2 → W ↦
V2 → W, AdvP*

If one adopts the rule in (32), then it is not immediately clear how the semantic contribution of the adjuncts can be determined.<sup>2</sup> For the rule in (31), one can combine the semantic contribution of the AdvP with the semantic contribution of the V2 in the input rule. This is of course also possible if the metarule is applied multiple times. If this metarule is applied to (33a), for example, the V2-node in (33a) contains the semantic contribution of the first adverb.

(33) a. V2 → V, NP, AdvP

b. V2 → V, NP, AdvP, AdvP

The V2-node in (33b) receives the semantic representation of the adverb applied to the V2-node in (33a).

Weisweber & Preuss (1992) have shown that it is possible to use metarules such as (31) if one does not use metarules to compute a set of phrase structure rules, but rather applies the metarules directly during the analysis of a sentence. Since sentences are always of finite length and the metarule introduces an additional AdvP on the right-hand side of the newly licensed rule, the metarule can only be applied a finite number of times.
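
The on-the-fly view of the adjunct metarule in (31) can be sketched as follows (the rule encoding is an illustrative assumption): each application adds exactly one AdvP daughter, so a sentence containing n adverbials requires at most n applications.

```python
# The adjunct metarule: extend the right-hand side of a V2 rule by one AdvP.
def adjunct_metarule(rule):
    lhs, daughters = rule
    if lhs == "V2":
        return ("V2", daughters + ["AdvP"])
    return None

rule = ("V2", ["V", "NP"])
once = adjunct_metarule(rule)   # (33a): V2 -> V, NP, AdvP
twice = adjunct_metarule(once)  # (33b): V2 -> V, NP, AdvP, AdvP
print(twice)  # ('V2', ['V', 'NP', 'AdvP', 'AdvP'])
```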

# **5.2 Passive as a metarule**

The German passive can be described in an entirely theory-neutral way as follows:<sup>3</sup>

- The subject is suppressed.
- If there is an accusative object, it becomes the subject.
This is true for all verb classes which can form the passive. It does not make a difference whether the verb takes one, two or three arguments:

	- b. [weil] noch gearbeitet wurde
	  because still worked aux
	  'because there was still working there'

<sup>2</sup> In LFG, an adjunct is entered into a set in the functional structure (see Section 7.1.6). This also works with the use of the Kleene star notation. From the f-structure, it is possible to compute the semantic denotation with corresponding scope by making reference to the c-structure. In HPSG, Kasper (1994) has made a proposal which corresponds to the GPSG proposal with regard to flat branching structures and an arbitrary number of adjuncts. In HPSG, however, one can make use of so-called relational constraints. These are similar to small programs which can create relations between values inside complex structures. Using such relational constraints, it is then possible to compute the meaning of an unrestricted number of adjuncts in a flat branching structure.

<sup>3</sup> This characterization does not hold for other languages. For instance, Icelandic allows for dative subjects. See Zaenen, Maling & Thráinsson (1985).

	- b. [weil] an Maria gedacht wurde
	  because at Maria thought aux
	  'because Maria was thought of'
	- b. [weil] der Weltmeister geschlagen wurde
	  because the.nom world.champion beaten aux
	  'because the world champion was beaten'
	- b. [weil] ihm der Aufsatz gegeben wurde
	  because him.dat the.nom essay given aux
	  'because he was given the essay'

In a simple phrase structure grammar, we would have to list two separate rules for each pair of sentences making reference to the valence class of the verb in question. The characteristics of the passive discussed above would therefore not be explicitly stated in the set of rules. In GPSG, it is possible to explain the relation between active and passive rules using a metarule: for each active rule, a corresponding passive rule with suppressed subject is licensed. The link between active and passive clauses can therefore be captured in this way.

An important difference to Transformational Grammar/GB is that we are not creating a relation between two trees, but rather between active and passive rules. The two rules license two unrelated structures, that is, the structure of (38b) is not derived from the structure of (38a).

	- b. [weil] der Weltmeister geschlagen wurde
	  because the.nom world.champion beaten aux
	  'because the world champion was beaten'

The generalization with regard to active/passive is captured nevertheless.

In what follows, I will discuss the analysis of the passive given in Gazdar, Klein, Pullum & Sag (1985) in some more detail. The authors suggest the following metarule for English (p. 59):<sup>4</sup>

(39) VP → W, NP ↦
VP[pas] → W, (PP[*by*])

This rule states that verbs which take an object can occur in a passive VP without this object. Furthermore, a *by*-PP can be added. If we apply this metarule to the rules in (40), then this will yield the rules listed in (41):


It is possible to use the rules in (40) to analyze verb phrases in active sentences:

(42) a. [S The man [VP devoured the carcass]].
b. [S The man [VP handed the sword to Tracy]].

The combination of a VP with the subject is licensed by an additional rule (S → NP, VP).

With the rules in (41), one can analyze the VPs in the corresponding passive sentences in (43):

(43) a. [S The carcass was [VP[pas] devoured (by the man)]].
b. [S The sword was [VP[pas] handed to Tracy (by the man)]].

At first glance, this analysis may seem odd as an object is replaced inside the VP by a PP which would be the subject in an active clause. Although this analysis makes correct predictions with regard to the syntactic well-formedness of structures, it seems unclear how one can account for the semantic relations. It is possible, however, to use a lexical rule that licenses the passive participle and manipulates the semantics of the output lexical item in such a way that the *by*-PP is correctly integrated semantically (Gazdar et al. 1985: 219).
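
The effect of the metarule in (39) can be sketched as follows; the encoding of rules is an illustrative assumption, not the notation of Gazdar et al. (1985).

```python
# The passive metarule: suppress the NP on the right-hand side of a VP rule
# and add an optional by-PP.
def passive_metarule(rule):
    lhs, daughters = rule
    if lhs == "VP" and "NP" in daughters:
        rest = list(daughters)
        rest.remove("NP")
        return ("VP[pas]", rest + ["(PP[by])"])
    return None

print(passive_metarule(("VP", ["H", "NP"])))
# ('VP[pas]', ['H', '(PP[by])']) -- cf. 'devoured (by the man)' in (43a)
print(passive_metarule(("VP", ["H"])))
# None -- no NP to suppress: exactly the problem that the German
# impersonal passive poses for this analysis, as discussed below
```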

We arrive at a problem, however, if we try to apply this analysis to German since the impersonal passive cannot be derived by simply suppressing an object. The V2-rules for verbs such as *arbeiten* 'work' and *denken* 'think' as used for the analysis of (34a) and (35a) have the following form:


<sup>4</sup> See Weisweber & Preuss (1992: 1114) for a parallel rule for German which refers to accusative case on the left-hand side of the metarule.


There is no NP on the right-hand side of these rules which could be replaced by a *von*-PP. If the passive is to be analyzed as suppressing an NP argument in a rule, then it should follow from the existence of the impersonal passive that the passive metarule has to be applied to rules which license finite clauses, since information about whether there is a subject or not is only present in rules for finite clauses.<sup>5</sup> In this kind of system, the rules for finite sentences (V3) are the basic rules and the rules for V2 would be derived from these.

It would only make sense to have a metarule which applies to V3 for German, since English does not have V3 rules which contain both the subject and its object on the right-hand side of the rule.<sup>6</sup> For English, it is assumed that a sentence consists of a subject and a VP (see Gazdar et al. 1985: 139). This means that we arrive at two very different analyses for the passive in English and German, which do not capture the descriptive insight that the passive is the suppression of the subject and the subsequent promotion of the object in the same way. The central difference between German and English seems to be that English obligatorily requires a subject,<sup>7</sup> which is why English does not have an impersonal passive. This requirement is a property independent of the passive; it nevertheless affects the possibility of passive structures.

The problem with the GPSG analysis is the fact that valence is encoded in phrase structure rules and that subjects are not present in the rules for verb phrases. In the following chapters, we will encounter approaches from LFG, Categorial Grammar, HPSG, Construction Grammar, and Dependency Grammar which encode valence separately from phrase structure rules and therefore do not have a principled problem with impersonal passive.

See Jacobson (1987b: 394–396) for more problematic aspects of the passive analysis in GPSG and for the insight that a lexical representation of valence – as assumed in Categorial Grammar, GB, LFG and HPSG – allows for a lexical analysis of the phenomenon, which cannot be formulated in GPSG for principled reasons having to do with the fundamental assumptions regarding the representation of valence.

# **5.3 Verb position**

Uszkoreit (1987) analyzed verb-initial and verb-final order as linearization variants of a flat tree. The details of this analysis have already been discussed in Section 5.1.2.

An alternative suggestion in a version of GPSG comes from Jacobs (1986: 110). Jacobs's analysis is a rendering of the verb movement analysis in GB.

<sup>5</sup>GPSG differs from GB in that infinitive verbal projections do not contain nodes for empty subjects. This is also true for all other theories discussed in this book with the exception of Tree-Adjoining Grammar.

<sup>6</sup>Gazdar et al. (1985: 62) suggest a metarule similar to our subject introduction metarule on page 189. The rule that is licensed by their metarule is used to analyze the position of auxiliaries in English and only licenses sequences of the form AUX NP VP. In such structures, subjects and objects are not in the same local tree either.

<sup>7</sup>Under certain conditions, the subject can also be omitted in English. For more on imperatives and other subject-less examples, see page 538.

He assumes that there is an empty verb in final position and links it to the verb in initial position using technical means which we will see in more detail in the following section.

# **5.4 Long-distance dependencies as the result of local dependencies**

One of the main innovations of GPSG is its treatment of long-distance dependencies as a sequence of local dependencies (Gazdar 1981b). This approach will be explained taking constituent fronting to the prefield in German as an example. Until now, we have only seen the GPSG analysis for verb-initial and verb-final position: the sequences in (45) are simply linearization variants.

	- b. Gibt der Mann dem Kind das Buch?
	  gives the.nom man the.dat child the.acc book
	  'Does the man give the child the book?'

What we want is to derive the verb-second order in the examples in (46) from V1 order in (45b).

	- b. Dem Kind gibt der Mann das Buch.
	  the.dat child gives the.nom man the.acc book
	  'The man gives the child the book.'

For this, the metarule in (47) has to be used. This metarule removes an arbitrary category X from the set of categories on the right-hand side of the rule and represents it on the left-hand side with a slash ('/'):<sup>8</sup>

(47) V3 → W, X ↦
V3/X → W

This rule creates the rules in (49) from (48):


<sup>8</sup>An alternative to Uszkoreit's trace-less analysis (1987: 77), which is explained here, consists of using a trace for the extracted element as in GB.

The rule in (50) connects a sentence with verb-initial order with a constituent which is missing in the sentence:

(50) V3[+fin] → X[+top], V3[+mc]/X

In (50), X stands for an arbitrary category which is marked as missing in V3 by the '/'. X is referred to as a *filler*.

The interesting cases of values for X with regard to our examples are given in (51):

(51) V3[+fin] → N2[+top, case nom], V3[+mc]/N2[case nom]
V3[+fin] → N2[+top, case dat], V3[+mc]/N2[case dat]
V3[+fin] → N2[+top, case acc], V3[+mc]/N2[case acc]

(51) does not show actual rules. Instead, (51) shows examples for insertions of specific categories into the X-position, that is, different instantiations of the rule.

The following linearization rule ensures that a constituent marked by [+top] in (50) precedes the rest of the sentence:

(52) [+top] < X

top stands for *topicalized*. As was mentioned on page 107, the prefield is not restricted to topics. Focused elements and expletives can also occur in the prefield, which is why the feature name is not ideal. However, it is possible to replace it with something else, for instance *prefield*. This would not affect the analysis. X in (52) stands for an arbitrary category. This is a new X and it is independent from the one in (50).

Figure 5.4 shows the interaction of the rules for the analysis of (53).<sup>9</sup>

(53) Dem Kind gibt er das Buch.
the.dat child gives he.nom the.acc book
'He gives the child the book.'

Figure 5.4: Analysis of fronting in GPSG

The metarule in (47) licenses a rule which adds a dative object to slash. This rule now licenses the subtree for *gibt er das Buch* 'gives he the book'.

<sup>9</sup> The fin feature has been omitted on some of the nodes since it is redundant: +mc-verbs always require the fin value '+'.

The linearization rule V[+mc] < X orders the verb to the very left inside the local tree for V3. In the next step, the constituent following the slash is bound off. Following the LP-rule [+top] < X, the bound constituent must be ordered to the left of the V3 node.

The analysis given in Figure 5.4 may seem too complex since the noun phrases in (53) all depend on the same verb. It is possible to invent a system of linearization rules which would allow one to analyze (53) with an entirely flat structure. One would nevertheless still need an analysis for sentences such as those in (37) on page 107 – repeated here as (54) for convenience:

	- c. Wen glaubst du, daß ich \_ gesehen habe?<sup>12</sup>
	  who believe you that I seen have
	  'Who do you think I saw?'
	- d. [Gegen ihn] falle es den Republikanern hingegen schwerer, [[Angriffe \_] zu lancieren].<sup>13</sup>
	  against him fall it the Republicans however more.difficult attacks to launch
	  'It is, however, more difficult for the Republicans to launch attacks against him.'

The sentences in (54) cannot be explained by local reordering as the elements in the prefield are not dependent on the highest verb, but instead originate in the lower clause. Since only elements from the same local tree can be reordered, the sentences in (54) cannot be analyzed without postulating some kind of additional mechanism for long-distance dependencies.<sup>14</sup>

Before I conclude this chapter, I will discuss yet another example of fronting, namely one of the more complex examples in (54). The analysis of (54c) consists of several

<sup>10</sup>taz, 04.05.2001, p. 20.

<sup>11</sup>Spiegel, 8/1999, p. 18.

<sup>12</sup>Scherpenisse (1986: 84).

<sup>13</sup>taz, 08.02.2008, p. 9.

<sup>14</sup>One could imagine analyses that assume the special mechanism for nonlocal dependencies only for sentences that really involve dependencies that are nonlocal. This was done in HPSG by Kathol (1995) and Wetta (2011) and by Groß & Osborne (2009) in Dependency Grammar. I discuss the Dependency Grammar analyses in detail in Section 11.7.1 and show that analyses that treat simple V2 sentences as ordering variants of non-V2 sentences have problems with the scope of fronted adjuncts, with coordination of simple sentences and sentences with nonlocal dependencies and with so-called multiple frontings.

steps: the introduction, percolation and finally binding off of information about the long-distance dependency. This is shown in Figure 5.5. Simplifying somewhat, I assume that

Figure 5.5: Analysis of long-distance dependencies in GPSG

*gesehen habe* 'have seen' behaves like a normal transitive verb.<sup>15</sup> A phrase structure rule licensed by the metarule in (47) licenses the combination of *ich* 'I' and *gesehen habe* 'have seen' and represents the missing accusative object on the V3 node. The complementizer *dass* 'that' is combined with *ich gesehen habe* 'I have seen' and the information that an accusative NP is missing is percolated up the tree. This percolation is controlled by the so-called *Foot Feature Principle*, which states that all foot features of all the daughters are also present on the mother node. Since the slash feature is a foot feature, the categories following the '/' percolate up the tree if they are not bound off in the local tree. In the final step, V3/N2[acc] is combined with the fronted N2[acc], that is, with the filler. The result is a complete finite declarative clause of the highest projection level.
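The percolation and binding off of slash can likewise be pictured in a few lines. The following sketch uses invented names and makes no claim to completeness (slash is the only foot feature modeled); it mirrors the three steps just described:

```python
# Slash percolation in the spirit of the Foot Feature Principle: the
# mother collects the slash values of its daughters; a filler daughter
# (rule (50)) binds a matching slash element off. Illustrative only.

def mother_slash(daughters, filler=None):
    slash = [s for d in daughters for s in d.get("slash", [])]
    if filler is not None and filler in slash:
        slash.remove(filler)           # binding off the dependency
    return slash

v3 = {"cat": "V3", "slash": ["N2[acc]"]}       # ich _ gesehen habe
print(mother_slash([{"cat": "C"}, v3]))        # ['N2[acc]']: percolates past dass
print(mother_slash([{"cat": "N2[acc]"}, v3],
                   filler="N2[acc]"))          # []: wen binds the gap off
```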

# **5.5 Summary and classification**

This section is for advanced readers. It compares GPSG with theories introduced later in the book, in particular Categorial Grammar and HPSG. I suggest coming back here after reading at least Chapters 6, 8 and 9.

Some twenty years after Chomsky's criticism of phrase structure grammars, the first large grammar fragment in the GPSG framework appeared and offered analyses of phenomena which could not be described by simple phrase structure rules. Although works in GPSG essentially build on Harman's 1963 idea of a transformation-less grammar, they

<sup>15</sup>See Nerbonne (1986a) and Johnson (1986), for analyses of verbal complexes in GPSG.

also go far beyond this. A special achievement of GPSG is, in particular, the treatment of long-distance dependencies as worked out by Gazdar (1981b). By using the slash-mechanism, it was possible to explain the simultaneous extraction of elements from conjuncts (Across the Board Extraction, Ross 1967, Williams 1978: Section 4.2.4.1). The following examples from Gazdar (1981b: 173) show that gaps in conjuncts must be identical, that is, a filler of a certain category must correspond to a gap in every conjunct:

(55) a. The kennel which Mary made and Fido sleeps in has been stolen.

(= S/NP & S/NP)


b. * The kennel in which Mary made and Fido sleeps has been stolen.

(= S/NP & S/PP)

GPSG can plausibly handle this with mechanisms for the transmission of information about gaps. In symmetric coordination, the slash elements in each conjunct have to be identical. A transformational approach, by contrast, is not straightforward: one normally assumes in such analyses that there is a tree and that something is moved to another position in the tree, thereby leaving a trace. In coordinate structures, however, the filler would correspond to two or more traces, and it cannot be explained how the filler could originate in more than one place.

While the analysis of Across the Board extraction is a true highlight of GPSG, there are some problematic aspects that I want to address in the following: the interaction between valence and morphology, the representation of valence and partial verb phrase fronting, and the expressive power of the GPSG formalism.

# **5.5.1 Valence and morphology**

The encoding of valence in GPSG is problematic for several reasons. For example, morphological processes take into account the valence properties of words. Adjectival derivation with the suffix -*bar* '-able' is only productive with transitive verbs, that is, with verbs with an accusative object which can undergo passivization:


A rule for derivations with -*bar* '-able' must therefore make reference to valence information. This is not possible in GPSG grammars since every lexical entry is only assigned a number which says something about the rules in which this entry can be used. For -*bar* derivations, one would have to list in the derivational rule all the numbers which correspond to rules with accusative objects, which of course does not adequately describe the phenomenon. Furthermore, the valence of the resulting adjective also depends on the valence of the verb. For example, a verb such as *vergleichen* 'compare' requires a *mit* (with)-PP and *vergleichbar* 'comparable' does too (Riehemann 1993: 7, 54; 1998: 68). In the following chapters, we will encounter models which assume that lexical entries contain information about whether a verb selects for an accusative object. In such models, morphological rules which need to access the valence properties of linguistic objects can be adequately formulated.

The issue of interaction of valence and derivational morphology will be taken up in Section 21.2.2 again, where approaches in LFG and Construction Grammar are discussed that share assumptions about the encoding of valence with GPSG.

## **5.5.2 Valence and partial verb phrase fronting**

Nerbonne (1986a) and Johnson (1986) investigate fronting of partial VPs in the GPSG framework. (57) gives some examples: in (57a) the bare verb is fronted and its arguments are realized in the middle field, in (57b) one of the objects is fronted together with the verb, in (57c) the other object is fronted with the verb and in (57d) both objects are fronted with the verb.

	- b. [Ein Märchen erzählen] wird er seiner Tochter müssen.
	     a.acc fairy.tale tell will he.nom his.dat daughter must
	- c. [Seiner Tochter erzählen] wird er das Märchen müssen.
	     his.dat daughter tell will he.nom the.acc fairy.tale must
	- d. [Seiner Tochter ein Märchen erzählen] wird er müssen.
	     his.dat daughter a.acc fairy.tale tell will he.nom must

The problem with sentences such as those in (57) is that the valence requirements of the verb *erzählen* 'to tell' are realized in various positions in the sentence. For fronted constituents, one requires a rule which allows a ditransitive verb to be realized without its arguments or with one or two objects. This means that the ditransitive verb *erzählen* 'to tell' would need the same valence numbers as *schlafen* 'to sleep', as *helfen* 'to help', and as *kennen* 'to know', in addition to its normal ditransitive number. Furthermore, it has to be ensured that the arguments that are missing in the prefield are realized in the remainder of the clause. It is not legitimate to omit obligatory arguments or to realize arguments with other properties, such as a different case, as the examples in (58) show:

	- b. * Verschlungen hat er nicht.
	     devoured has he.nom not
	- c. * Verschlungen hat er ihm nicht.
	     devoured has he.nom him.dat not

The obvious generalization is that the fronted and unfronted arguments must add up to the total set belonging to the verb. This is scarcely possible with the rule-based valence representation in GPSG. In theories such as Categorial Grammar (see Chapter 8), it is possible to formulate elegant analyses of (57) (Geach 1970). Nerbonne and Johnson both suggest analyses for sentences such as (57) which ultimately amount to changing the representation of valence information in the direction of Categorial Grammar.

Before I turn to the expressive power of the GPSG formalism, I want to note that the problems that we discussed in the previous subsections are both related to the representation of valence in GPSG. We already run into valence-related problems when discussing the passive in Section 5.2: since subjects and objects are introduced in phrase structure rules and since there are some languages in which subject and object are not in the same local tree, there seems to be no way to describe the passive as the suppression of the subject in GPSG.

## **5.5.3 Generative capacity**

In GPSG, the system of linearization, dominance and metarules is normally restricted by conditions that we will not discuss here, in such a way that one could create a phrase structure grammar of the kind we saw in Chapter 2 from the specification of a GPSG grammar. Such grammars are also called context-free grammars. In the mid-80s, it was shown that context-free grammars are not able to describe natural language in general, that is, it was shown that there are languages that require more powerful grammar formalisms than context-free grammars (Shieber 1985, Culy 1985; see Pullum (1986) for a historical overview). The so-called *generative capacity* of grammar formalisms is discussed in Chapter 17.

Following the emergence of constraint-based models such as HPSG (see Chapter 9) and unification-based variants of Categorial Grammar (see Chapter 8 and Uszkoreit 1986a), most authors previously working in GPSG turned to other frameworks. The GPSG analysis of long-distance dependencies and the distinction between immediate dominance and linear precedence are still used in HPSG and variants of Construction Grammar to this day. See also Section 12.2 for a Tree Adjoining Grammar variant that separates dominance from precedence.


# **Comprehension questions**


## **Exercises**

Write a small GPSG grammar that can analyze the sentences in (59):

	- (59) a. [dass] der Mann ihn liest
	          that the.nom man him.acc reads
	          'that the man reads it'
		- b. [dass] ihn der Mann liest
		     that him.acc the.nom man reads
		     'that the man reads it'
		- c. Der Mann liest ihn.
		     the.nom man reads him.acc
		     'The man reads it.'

Include all arguments in a single rule without using the metarule for introducing subjects.

# **Further reading**

The main publication in GPSG is Gazdar, Klein, Pullum & Sag (1985). This book has been critically discussed by Jacobson (1987b). Some problematic analyses are contrasted with alternatives from Categorial Grammar, and reference is made to the work of Pollard (1984), which was heavily influenced by Categorial Grammar and counts as one of the predecessors of HPSG. Some of Jacobson's suggestions can be found in later works in HPSG.

Grammars of German can be found in Uszkoreit (1987) and Busemann (1992). Gazdar (1981b) developed an analysis of long-distance dependencies, which is still used today in theories such as HPSG.

A history of the genesis of GPSG can be found in Pullum (1989b).

# **6 Feature descriptions**

In the previous chapter, we talked about sets of feature-value pairs, which can be used to describe linguistic objects. In this chapter, we will introduce feature descriptions which play a role in theories such as LFG, HPSG, Construction Grammar, versions of Categorial Grammar and TAG (and even some formalizations of Minimalist theories (Veenstra 1998)). This chapter will therefore lay some of the groundwork for the chapters to follow.

Feature structures are complex entities which can model properties of a linguistic object. Linguists mostly work with feature descriptions which describe only parts of a given feature structure. The difference between models and descriptions will be explained in more detail in Section 6.7.

Alternative terms for feature structures are:


Other terms for feature description are the following:


In what follows, I will restrict the discussion to the absolutely necessary details in order to keep the formal part of the book as short as possible. I refer the interested reader to Shieber (1986), Pollard & Sag (1987: Chapter 2), Johnson (1988), Carpenter (1992), King (1994) and Richter (2004, 2021). Shieber's book is an accessible introduction to Unification Grammars. The works by King and Richter, which introduce important foundations for HPSG, would most probably not be accessible for those without a good grounding in mathematics. However, it is important to know that these works exist and that the corresponding linguistic theory is built on a solid foundation.

# **6.1 Feature descriptions**

When describing linguistic signs, we have to say something about their properties. For a noun, we can say that it has case, gender, number and person features. For a word such as *Mannes* 'man', we can say that these features have the values *genitive*, *masculine*, *singular* and *3*. If we were to write these as a list of feature-value pairs, we would arrive at the following feature description:


(1) Feature-value pair for *Mannes*:

    [ case   *genitive*
      gender *masculine*
      number *singular*
      person *3* ]

It is possible to describe a variety of different things using feature descriptions. For example, we can describe a person as in (2):

(2) [ firstname     *max*
      lastname      *meier*
      date-of-birth *10.10.1985* ]

People are related to other people – a fact that can also be expressed in feature-value pairs. For example, the fact that Max Meier has a father called Peter Meier can be captured by expanding (2) as follows:

(3) [ firstname     *max*
      lastname      *meier*
      date-of-birth *10.10.1985*
      father        [ firstname     *peter*
                      lastname      *meier*
                      date-of-birth *10.05.1960* ] ]
The value of the father feature is another feature description containing the same features as (2).

In feature descriptions, a *path* is a sequence of features which immediately follow each other. The *value of a path* is the feature description at the end of the path. Therefore, the value of father|date-of-birth is *10.05.1960*.
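Nested feature descriptions of this kind are easy to emulate with nested dictionaries, and the value of a path is then just an iterated lookup. The following toy code (all names invented) replays the example:

```python
# Feature descriptions as nested dicts; the value of a path is obtained
# by following its features one after the other. Toy illustration only.

max_meier = {
    "firstname": "max",
    "lastname": "meier",
    "date-of-birth": "10.10.1985",
    "father": {"firstname": "peter",
               "lastname": "meier",
               "date-of-birth": "10.05.1960"},
}

def path_value(description, path):
    value = description
    for feature in path:
        value = value[feature]
    return value

print(path_value(max_meier, ["father", "date-of-birth"]))  # 10.05.1960
```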

One can think of many different features that could be included in representations such as (3). One may wonder how to integrate information about offspring into (3).

An obvious solution would be to add features for daughter and son:


This solution is not satisfactory as it is not immediately clear how one could describe a person with several daughters. Should one really introduce features such as daughter-1 or daughter-3?

(5) [ firstname     *max*
      lastname      *meier*
      date-of-birth *10.10.1985*
      father        …
      mother        …
      daughter-1    …
      daughter-2    …
      daughter-3    … ]

How many features do we want to assume? Where is the limit? What would the value of daughter-32 be?

For this case, it makes much more sense to use a list. Lists are indicated with angle brackets. Any number of elements can occur between these brackets. A special case is when no element occurs between the brackets; a list with no elements is called the *empty list*. In the following example, Max Meier has a daughter called Clara, who herself has no daughter.


Now, we are left with the question of sons. Should we add another list for sons? Do we want to differentiate between sons and daughters? It is certainly the case that the gender of the children is an important property, but these are properties of the objects themselves, since every person has a gender. The description in (7) therefore offers a more adequate representation.

At this point, one could ask why the parents are not included in a list as well. In fact, we find similar questions also in linguistic works: how is information best organized for the job at hand? One could argue for the representation of descriptions of the parents under separate features, by pointing out that with such a representation it is possible to make certain claims about a mother or father without having to necessarily search for the respective descriptions in a list.

If the order of the elements is irrelevant, then we could use sets rather than lists. Sets are written inside curly brackets.<sup>1</sup>

<sup>1</sup> The definition of a set requires many technicalities. In this book, I would use sets only for collecting semantic information. This can be done equally well using lists, which is why I do not introduce sets here and instead use lists.

#### 6 Feature descriptions


# **6.2 Types**

In the previous section, we introduced feature descriptions consisting of feature-value pairs and showed that it makes sense to allow for complex values for features. In this section, feature descriptions will be augmented to include types. Feature descriptions which are assigned a type are also called *typed feature descriptions*. Types say something about which features can or must belong to a particular structure. The description previously discussed describes an object of the type *person*.


Types are written in *italics*.

The specification of a type determines which properties a modeled object has. It is then only possible for a theory to say something about these properties. Properties such as operating voltage are not relevant for objects of the type *person*. If we know the type of a given object, then we also know that this object must have certain properties even if we do not yet know their exact values. In this way, (9) is still a description of Max Meier even though it does not contain any information about Max's date of birth:


We know, however, that Max Meier must have been born on some day since this is a description of the type *person*. The question *What is Max's date of birth?* makes sense for a structure such as (9) in a way that the question *Which operating voltage does Max have?* does not. If we know that an object is of the type *person*, then we have the following basic structure:



In (10) and (9), the values of features such as firstname are in italics. These values are also types. They are different from types such as *person*, however, as no features belong to them. These kinds of types are called *atomic*.

Types are organized into hierarchies. It is possible to define the subtypes *woman* and *man* for *person*. These would determine the gender of a given object. (11) shows the feature description for the types *woman* and *man*.


At this point, we could ask ourselves whether we really need the feature gender. The necessary information is already represented in the type *woman*. The question of whether specific information is represented by special features or stored in a type without a corresponding individual feature will surface again in the discussion of linguistic analyses. The two alternatives differ mainly in that information which is modeled by types is not immediately accessible for structure sharing, which is discussed in Section 6.4.

Type hierarchies play an important role in capturing linguistic generalizations, which is why type hierarchies and the inheritance of constraints and information will be explained with reference to a further example in what follows. One can think of type hierarchies as an effective way of organizing information. In an encyclopedia, the individual entries are linked in such a way that the entries for monkey and mouse will each contain a pointer to mammal. The description found under mammal therefore does not have to be repeated for the subordinate concepts. In the same way, if one wishes to


Figure 6.1: Non-linguistic example of multiple inheritance

describe various electric appliances, one can use the hierarchy in Figure 6.1. The most general type *electrical device* is the highest in Figure 6.1. Electrical devices have certain properties, e.g., a power supply with a certain power consumption. All subtypes of *electrical device* "inherit" this property. In this way, *printing device* and *scanning device* also have a power supply with a specific power consumption. A *printing device* can produce information and a *scanning device* can read in information. A *photocopier* can both produce information and read it. Photocopiers have both the properties of scanning and printing devices. This is expressed by the connection between the two superordinate types and *photocopier* in Figure 6.1. If a type is at the same time the subtype of several superordinate types, then we speak of *multiple inheritance*. If devices can print, but not scan, they are of type *printer*. This type can have further more specific subtypes, which in turn may have particular properties, e.g., *laser printer*. New features can be added to subtypes, but it is also possible to make values of inherited features more specific. For example, the material that can be scanned with a *negative scanner* is far more restricted than that of the supertype *scanner*, since negative scanners can only scan negatives.

The objects that are modeled always have a maximally specific type. In the example above, this means that we can have objects of the type *laser printer* and *negative scanner* but not of the type *printing device*. This is due to the fact that *printing device* is not maximally specific since this type has two subtypes.

Type hierarchies with multiple inheritance are an important means for expressing linguistic generalizations (Flickinger, Pollard & Wasow 1985, Flickinger 1987, Sag 1997). Types of words or phrases which occur at the very top of these hierarchies correspond to constraints on linguistic objects, which are valid for linguistic objects in all languages. Subtypes of such general types can be specific to certain languages or language classes.
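For illustration, a hierarchy like the one in Figure 6.1 can be encoded as a mapping from each type to its immediate supertypes; whether one type is a subtype of another is then a simple reachability question. The following sketch uses invented names and covers only a subset of the types mentioned in the text:

```python
# A type hierarchy with multiple inheritance: each type lists its
# immediate supertypes; subsumption is reachability along these links.

SUPERTYPES = {
    "printing device": ["electrical device"],
    "scanning device": ["electrical device"],
    "printer": ["printing device"],
    "photocopier": ["printing device", "scanning device"],  # multiple inheritance
    "laser printer": ["printer"],
    "negative scanner": ["scanning device"],
}

def is_subtype(t, s):
    """True if s can be reached from t via supertype links (or t == s)."""
    return t == s or any(is_subtype(p, s) for p in SUPERTYPES.get(t, []))

print(is_subtype("photocopier", "scanning device"))    # True
print(is_subtype("laser printer", "scanning device"))  # False
```

The same check can also replace disjunctions of the kind discussed in the next section: instead of *printer* ∨ *photocopier*, one tests for the supertype *printing device*.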

# **6.3 Disjunction**

Disjunctions can be used if one wishes to express the fact that a particular object can have various different properties. If one were to organize a class reunion twenty years after leaving school and could not recall the exact names of some former classmates, it would be possible to search the web for "Julia (Warbanow or Barbanow)". In feature descriptions, this "or" is expressed by a '∨'.


Some internet search engines do not allow for searches with 'or'. In these cases, one has to carry out two distinct search operations: one for "Julia Warbanow" and then another for "Julia Barbanow". This corresponds to the two following disjunctively connected descriptions:


Since we have type hierarchies as a means of expression, we can sometimes do without disjunctive specification of values and instead state the supertype: for *printer* ∨ *photocopier*, one can simply write *printing device* if one assumes the type hierarchy in Figure 6.1 on the preceding page.

# **6.4 Structure sharing**

Structure sharing is an important part of the formalism. It serves to express the notion that certain parts of a structure are identical. A linguistic example for the identity of values is agreement. In sentences such as (14), the number value of the noun phrase has to be identical to that of the verb:

	- b. Die Männer schlafen.
	     the men sleep
	     'The men are sleeping.'
	- c. * Der Mann schlafen.
	     the man sleep
	     Intended: 'The man are sleeping.'

The identity of values is indicated by boxes containing numbers. The boxes can also be viewed as variables.

When describing objects we can make claims about equal values or claims about identical values. A claim about the identity of values is stronger. Let us take the following feature description containing information about the children that Max's father and mother have as an example:


Notice that under the paths father|children and mother|children, we find a list containing a description of a person with the first name Klaus. The question of whether the feature description describes one or two children of Peter and Anna cannot be answered. It is certainly possible that we are dealing with two different children from previous partnerships who both happen to be called Klaus.

By using structure sharing, it is possible to specify the identity of the two values as in (16). In (16), Klaus is a single child that belongs to both parents. Everything inside

the brackets which immediately follow 1 is equally present in both positions. One can think of 1 as a pointer or reference to a structure which has only been described once.

One question still remains open: what about Max? Max is also a child of his parents and should therefore also occur in a list of the children of his parents. There are two points in (16) where there are three dots. These ellipsis marks stand for information about the other children of Peter and Anna Meier. Our world knowledge tells us that both of them must have the same child, namely Max Meier himself. In the following section, we will see how this can be expressed in formal terms.
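The difference between merely equal values and structure-shared values corresponds to the difference between equality and identity of objects, which the following toy code (invented data; Python's `==` versus `is`) makes concrete:

```python
# Structure sharing: two paths lead to the very same object (identity),
# not just to two objects that happen to look alike (equality).

klaus = {"firstname": "klaus", "lastname": "meier"}

shared = {"father": {"children": [klaus]},          # like (16)
          "mother": {"children": [klaus]}}
copied = {"father": {"children": [dict(klaus)]},    # like (15)
          "mother": {"children": [dict(klaus)]}}

print(shared["father"]["children"][0] is shared["mother"]["children"][0])  # True
print(copied["father"]["children"][0] == copied["mother"]["children"][0])  # True
print(copied["father"]["children"][0] is copied["mother"]["children"][0])  # False
```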

# **6.5 Cyclic structures**

We have introduced structure sharing in order to be able to express the fact that Max's parents both have a son Klaus together. It would not be enough to list Max in the lists of children of his parents separately. We want to capture the fact that it is the same Max which appears in each of these lists and, furthermore, we have to ensure that the child being described is identical to the entire object being described. Otherwise, the description would permit a situation where Max's parents could have a second child also called Max. The description given in (17) captures all of these facts correctly.

Structures such as those described in (17) are called cyclic because one ends up going in a circle if one follows a particular path: e.g., the path father|children|…|father|children|… can potentially be repeated an infinite number of times.<sup>2</sup>
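In the dictionary emulation used above, such a cyclic structure is simply a reference leading back to the whole object; a minimal sketch:

```python
# A cyclic structure: max is a member of the children list of his own
# father, so father|children|...|father|children|... never bottoms out.

max_meier = {"firstname": "max"}
max_meier["father"] = {"firstname": "peter", "children": [max_meier]}

print(max_meier["father"]["children"][0] is max_meier)  # True
print(max_meier["father"]["children"][0]["father"]["children"][0]
      is max_meier)                                     # True, ad infinitum
```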

# **6.6 Unification**

In HPSG and Construction Grammar, grammatical rules are written with the help of feature descriptions, exactly like lexical entries. For a word or a larger

<sup>2</sup> The dots here stand for the path to 2 in the list which is the value of children. See Exercise 3.


phrasal entity to be usable as a daughter in a phrase licensed by some grammatical rule, the word or phrase must have properties which are compatible with the description of the daughters in the grammatical rule. If this kind of compatibility exists, then we say that the respective items are *unifiable*.<sup>3</sup> If one unifies two descriptions, the result is a description which contains information from both descriptions but no additional information.

The way unification works can be demonstrated with feature descriptions describing people. One can imagine that Bettina Kant goes to the private detective Max Müller and wants to find a specific person. Normally, those who go to a detective's office only come with a partial description of the person they are looking for, e.g., the gender, hair color or date of birth. Perhaps even the registration number of the car belonging to the person is known.

It is then expected of the detective that he or she provides information fitting the description. If we are looking for a blond female named Meier (18a), then we do not want to get descriptions of a male red-head (18b). The descriptions in (18) are incompatible and cannot be unified:


The description in (19) would be a possible result for a search for a blond, female individual called Meier:


<sup>3</sup> The term *unification* should be used with care. It is only appropriate if certain assumptions with regard to the formal basis of linguistic theories are made. Informally, the term is often used in formalisms where unification is not technically defined. In HPSG, it mostly means that the constraints of two descriptions lead to a single description. What one wants to say here, intuitively, is that the objects described have to satisfy the constraints of both descriptions at the same time (*constraint satisfaction*). Since the term *unification* is so broadly used, it will also be used in this section. The term will not play a role in the remaining discussions of theories, with the exception of explicitly unification-based approaches. In contrast, the concept of constraint satisfaction presented here is very important for the comprehension of the following chapters.

Katharina Meier could also have other properties unknown to the detective. The important thing is that the properties known to the detective match those that the client is looking for. Furthermore, it is important that the detective uses reliable information and does not make up any information about the sought object. The unification of the search in (18a) and the information accessible to the detective in (19) is in fact (19) and not (20), for example:


(20) contains information about children, which is neither contained in (18a) nor in (19). It could indeed be the case that Katharina Meier has no children, but there are perhaps several people called Katharina Meier with otherwise identical properties. With this invented information, we might exclude one or more possible candidates.

It is possible that our detective Max Müller does not have any information about hair color in his files. His files could contain the following information:


These data are compatible with the search criteria. If we were to unify the descriptions in (18a) and (21), we would get (19). If we assume that the detective has done a good job, then Bettina Kant now knows that the person she is looking for has the properties of her original search plus the newly discovered properties.
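A heavily simplified version of this kind of unification can be written for the dictionary emulation used in this chapter. The sketch below ignores types, structure sharing and cycles; the function name and data are invented:

```python
# Unification of two feature descriptions: complex values are unified
# recursively, atomic values must be equal, and the result contains the
# information of both inputs and nothing more. Very simplified.

def unify(d1, d2):
    if isinstance(d1, dict) and isinstance(d2, dict):
        result = dict(d1)
        for feature, value in d2.items():
            result[feature] = (unify(result[feature], value)
                               if feature in result else value)
        return result
    if d1 == d2:
        return d1
    raise ValueError(f"unification fails: {d1!r} vs. {d2!r}")

search = {"lastname": "meier", "gender": "female", "haircolor": "blond"}
files = {"firstname": "katharina", "lastname": "meier", "gender": "female"}
print(unify(search, files))   # the analogue of (19): both sets of facts
```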

# **6.7 Phenomena, models and formal theories**

In the previous sections, we introduced feature descriptions with types. These feature descriptions describe typed feature structures, which are models of observable linguistic structures. In the definitions of types, one determines which properties of linguistic objects should be described. The type hierarchy together with type definitions is also referred to as a *signature*. As a grammarian, one typically uses types in feature descriptions. These descriptions contain constraints which must hold for linguistic objects. If no constraints are given, all values that are compatible with the specification in the signature are possible values. For example, one can omit the case description of a linguistic object such as *Frau* 'woman' since *Frau* can – as shown in (22) – appear in all four cases:


In a given model, there are only fully specified representations, that is, the model contains four forms of *Frau*, each with a different case. For masculine nouns such as *Mann* 'man', one would have to say something about case in the description, since the genitive singular form *Mann-es* differs from the other singular forms, which can be seen by inserting *Mann* into the examples in (22). (23) shows the feature descriptions for *Frau* 'woman' and *Mann* 'man':

(23) a. Frau 'woman': [ gender *fem* ]
     b. Mann 'man':  [ gender *mas*
                       case   *nominative* ∨ *dative* ∨ *accusative* ]

Unlike (23b), (23a) does not contain a case feature since we do not need to say anything special about case in the description of *Frau*. Since all nominal objects require a case feature, it becomes clear that the structures for *Frau* must actually also have a case feature. The value of the case feature is of the type *case*. *case* is a general type which subsumes the subtypes *nominative*, *genitive*, *dative* and *accusative*. Concrete linguistic objects always have exactly one of these maximally specified types as their case value. The feature structures belonging to (23) are given in Figure 6.2 and Figure 6.3.

Figure 6.2: Feature structures for the description of *Frau* 'woman' in (23a)

In these representations, each node has a certain type (*noun*, *fem*, *nominative*, …) and the types in feature structures are always maximally specific, that is, they do not have any further subtypes. There is always an entry node (*noun* in the example above) and

Figure 6.3: Feature structures for the description of *Mann* 'man' in (23b)

the other nodes are connected with arrows that are annotated with the feature labels (gender, case).

If we return to the example with people from the previous sections, we can capture the difference between a model and a description as follows: if we have a model of people that includes first name, last name, date of birth, gender and hair color, then it follows that every object we model also has a birthday. We can, however, decide to omit these details from our descriptions if they do not play a role for stating constraints or formulating searches.

The connection between linguistic phenomena, the model and the formal theory is shown in Figure 6.4, which is adapted from Pollard & Sag (1994: 7). The model is designed

Figure 6.4: Phenomenon, model and formal theory according to Netter (1998: 26)

to model linguistic phenomena. Furthermore, it must be licensed by our theory. The theory determines the model and makes predictions with regard to possible phenomena.


# **Comprehension questions**



# **Exercises**


4. (Additional exercise) The relation *append* will play a role in Chapter 9. This relation serves to combine two lists to form a third. Relational constraints such as *append* do in fact constitute an expansion of the formalism. Using relational constraints, it is possible to relate any number of feature values to other values, that is, one can write programs which compute a particular value depending on other values. This poses the question as to whether one needs such powerful descriptive tools in a linguistic theory and, if we do allow them, what kind of complexity we afford them. A theory which can do without relational constraints should be preferred over one that uses relational constraints (see Müller 2007a: Chapter 20 for a comparison of theories).

For the concatenation of lists, there is a possible implementation in feature structures without recourse to relational constraints. Find out how this can be done. Give your sources and document how you went about finding the solution.

## **Further reading**

This chapter was designed to give the reader an easy-to-follow introduction to typed feature descriptions. The mathematical properties of the structures, type hierarchies and the combinatorial possibilities of such structures could not be discussed in detail here, but knowledge of at least part of these properties is important for work in computational linguistics and for developing one's own analyses. For more information, I refer the interested reader to the following publications: Shieber (1986) is a short introduction to the theory of Unification Grammar. It offers a relatively general overview followed by a discussion of important grammar types such as DCG, LFG, GPSG, HPSG and PATR-II. Johnson (1988) describes the formalism of untyped feature structures in a mathematically precise way. Carpenter (1992) goes into detail about the mathematical aspects of typed feature structures. The formalism developed by King (1999) for HPSG grammars forms the basis for the formalism by Richter (2004), which currently counts as the standard formalism for HPSG. See also Richter (2021) for an overview of the formal properties of HPSG grammars.

This chapter introduced typed feature structures/descriptions. Frameworks like LFG do not use types and type hierarchies, but they use macros instead. The formal underpinnings of LFG, including assumptions regarding models, differ slightly from those presented here. Przepiórkowski (2023) discusses both LFG and HPSG assumptions about the model theoretic foundations of the respective frameworks and points out commonalities and differences.

# **7 Lexical Functional Grammar**

Lexical Functional Grammar (LFG) was developed in the 80s by Joan Bresnan and Ron Kaplan (Bresnan & Kaplan 1982). LFG forms part of so-called West-Coast linguistics: unlike MIT, where Chomsky works and teaches, the institutes of researchers such as Joan Bresnan and Ron Kaplan are on the west coast of the USA (Joan Bresnan in Stanford and Ron Kaplan at Xerox in Palo Alto and now at the language technology firm Nuance Communications in the Bay Area in California).

Bresnan & Kaplan (1982) view LFG explicitly as a psycholinguistically plausible alternative to transformation-based approaches. For a discussion of the requirements regarding the psycholinguistic plausibility of linguistic theories, see Chapter 15.

The more in-depth works on German are Berman (1996, 2003a) and Cook (2001).

LFG has well-designed formal foundations (Kaplan & Bresnan 1982, Kaplan 1989), and hence first implementations were available rather quickly (Frey & Reyle 1983b,a, Yasukawa 1984, Block & Hunze 1986, Eisele & Dörre 1986, Wada & Asher 1986, Delmonte 1990, Her, Higinbotham & Pentheroudakis 1991, Kohl 1992, Kohl, Gardent, Plainfossé, Reape & Momma 1992, Kaplan & Maxwell III 1996, Mayo 1997, 1999, Boullier & Sagot 2005a,b, Clément 2009, Clément & Kinyon 2001).

The following is a list of languages with implemented LFG fragments, probably incomplete:




Many of these grammars were developed in the ParGram consortium<sup>1</sup> (Butt, King, Niño & Segond 1999, Butt, Dyvik, King, Masuichi & Rohrer 2002). Apart from these grammars, there is a small fragment of Northern Sotho, which is currently being expanded (Faaß 2010).

Many of the LFG systems combine linguistically motivated grammars with a statistical component. Such a component can help to find the preferred reading of a sentence first, it can increase the efficiency of processing, and it can make the overall processing robust (for instance Kaplan et al. 2004, Riezler et al. 2002). Josef van Genabith's group in Dublin is working on the induction of LFG grammars from corpora (e.g., Johnson et al. 1999, O'Donovan et al. 2005, Cahill et al. 2005, Chrupala & van Genabith 2006, Guo et al. 2007, Cahill et al. 2008, Schluter & van Genabith 2009).

<sup>1</sup> http://pargram.w.uib.no/research-groups/. 2022-11-24.

Some of the systems can be tested online:


# **7.1 General remarks on the representational format**

LFG assumes multiple levels of representation.<sup>2</sup> The most important are c-structure and f-structure. The c-structure is the constituent structure and it is licensed by a phrase structure grammar. This phrase structure grammar uses X̄ structures for languages for which this is appropriate. f-structure stands for functional structure. The functional structure contains information about the predicates involved and about the grammatical functions (subject, object, …) which occur in a constituent. Mapping functions mediate between these representational levels.

### **7.1.1 Functional structure**

In LFG, grammatical functions such as subject and object play a very important role. Unlike in most other theories discussed in this book, they are primitives of the theory. A sentence such as (1a) will be assigned a functional structure as in (1b):

(1) a. David devoured a sandwich.

b. [ pred 'DEVOUR⟨SUBJ,OBJ⟩'
     subj [ pred 'DAVID' ]
     obj  [ spec A
            pred 'SANDWICH' ] ]

All lexical items that have a meaning (e.g., nouns, verbs, adjectives) contribute a pred feature with a corresponding value. The grammatical functions governed by a head (government = subcategorization) are determined in the specification of pred.<sup>3</sup> Corresponding functions are called *governable grammatical functions*. Examples of these are shown in Table 7.1 on the next page (Dalrymple 2006). The PRED specification corresponds to the theta grid in GB theory. The valence of a head is specified by the pred value.

The non-governable grammatical functions are given in Table 7.2 on the following page. Topic and focus are information-structural terms. There are a number of works on their exact definition, which differ to varying degrees (Kruijff-Korbayová & Steedman

<sup>2</sup> The English examples and their analyses discussed in this section are taken from Dalrymple (2001) and Dalrymple (2006).

<sup>3</sup> In the structure in (1b), the SUBJ and OBJ in the list following *devour* are identical to the values of SUBJ and OBJ in the structure. For reasons of presentation, this will not be explicitly indicated in this structure and following structures.


Table 7.1: Governable grammatical functions

Table 7.2: Non-governable grammatical functions


2003: 253–254), but broadly speaking, one can say that the focus of an utterance constitutes new information and that the topic is old or given information. Bresnan (2001: 97) uses the following question tests in order to determine topic and focus:


f-structures are characterized using functional descriptions. For example, one can refer to the value of the feature tense in the functional structure *f* using the following expression:

(4) (f TENSE)

It is possible to say something about the value which this feature should have in the feature description. The following description expresses the fact that in the structure *f*, the feature TENSE must have the value PAST.

(5) (f TENSE) = PAST

The value of a feature may also be a specific f-structure. The expression in (6) ensures that the subj feature in *f* is the f-structure *g*:

(6) (f SUBJ) = g

For the analysis of (7a), we get the constraints in (7b):

	- a. David sneezed.
	- b. (f PRED) = 'SNEEZE⟨SUBJ⟩'
	     (f TENSE) = PAST
	     (f SUBJ) = g
	     (g PRED) = 'DAVID'

The description in (7b) describes the following structure:

(8) f: [ pred  'SNEEZE⟨SUBJ⟩'
         tense PAST
         subj  g: [ pred 'DAVID' ] ]

But (7b) also describes many other structures which contain further features. We are only interested in minimal structures that contain the information provided in the description.
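The way a functional description determines its minimal f-structure can be mimicked with a handful of lines: f-structures become dictionaries, and each equation either fixes an atomic value or identifies a feature's value with another f-structure. A toy sketch of (7b), with an invented encoding:

```python
# Building the minimal f-structure in (8) from the equations in (7b).
# Each equation (structure, feature, value) adds information; an already
# present conflicting value would make the description inconsistent.

f, g = {}, {}
equations = [
    (f, "PRED", "'SNEEZE<SUBJ>'"),
    (f, "TENSE", "PAST"),
    (f, "SUBJ", g),                 # (f SUBJ) = g
    (g, "PRED", "'DAVID'"),
]

for structure, feature, value in equations:
    assert structure.get(feature, value) == value, "inconsistent description"
    structure[feature] = value

print(f)
# {'PRED': "'SNEEZE<SUBJ>'", 'TENSE': 'PAST', 'SUBJ': {'PRED': "'DAVID'"}}
```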

(9) shows how a node in the c-structure can be connected to the f-structure for the entire sentence:

The function 𝜙 from the NP node to the f-structure corresponding to the NP is depicted with an arrow marked 𝜙.

A phrase and its head always correspond to the same f-structure:

(10) [V′ [V sneezed]] ↦ [ pred  'SNEEZE⟨SUBJ⟩'
                          tense PAST ]

In LFG grammars of English, the CP/IP system is assumed as in GB theory (see Section 3.1.5). IP, I′ and I (and also VP) are mapped onto the same f-structure.

(11) a. David is yawning.

f-structures have to fulfill two well-formedness conditions: they have to be both *complete* and *coherent*. Both these conditions will be discussed in the following sections.

# **7.1.2 Completeness**

Every head contributes a constraint on the pred value of the corresponding f-structure. In determining completeness, one has to check that the elements required in the pred value are actually realized. In (12b), the obj value required by the pred specification is missing, which is why (12a) is ruled out by the theory.

(12) a. * David devoured.
     b. [ pred 'DEVOUR⟨SUBJ,OBJ⟩'
          subj [ pred 'DAVID' ] ]

# **7.1.3 Coherence**

 

The Coherence Condition requires that all argument functions in a given f-structure be selected in the value of the local pred attribute. (13a) is ruled out because comp does not appear among the arguments of *devour*.

 

(13) a. \* David devoured a sandwich that Peter sleeps.


The constraints on completeness and coherence together ensure that all and only those arguments required in the PRED specification are actually realized. Both of those constraints taken together correspond to the Theta-Criterion in GB theory (see page 92).<sup>4</sup>

<sup>4</sup> For the differences between predicate-argument structures in LFG and the deep structure oriented Theta Criterion, see Bresnan & Kaplan (1982: xxvi–xxviii).
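Both well-formedness conditions are easy to state over a dictionary encoding of f-structures. In the sketch below, a PRED value is simplified to a pair of a relation name and the set of functions it governs, and GOVERNABLE is a stand-in for the governable functions of Table 7.1; all names are invented:

```python
# Completeness: every function governed by PRED is present.
# Coherence: every argument function present is governed by PRED.

GOVERNABLE = {"SUBJ", "OBJ", "OBJ2", "COMP", "XCOMP", "OBL"}  # stand-in set

def well_formed(fstruct):
    _relation, governed = fstruct["PRED"]
    present = {feat for feat in fstruct if feat in GOVERNABLE}
    return set(governed) <= present and present <= set(governed)

incomplete = {"PRED": ("devour", {"SUBJ", "OBJ"}),
              "SUBJ": {"PRED": ("David", set())}}
print(well_formed(incomplete))   # False, cf. (12a): OBJ demanded but absent

incoherent = dict(incomplete,
                  OBJ={"PRED": ("sandwich", set())},
                  COMP={"PRED": ("sleep", {"SUBJ"})})
print(well_formed(incoherent))   # False, cf. (13a): COMP is not governed
```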

### **7.1.4 Restrictions on the c-structure/f-structure relation**

Symbols in c-structures are assigned restrictions for f-structures. The following symbols are used: '↑ ' refers to the f-structure of the immediately dominating node and '↓' refers to the f-structure of the c-structure node bearing the annotation. A common annotation is '↑ = ↓'. This constraint states that the f-structure of the mother node is identical to that of the annotated category:

(14) V′ → V
          ↑ = ↓
     f-structure of the mother = own f-structure

The annotation '↑ = ↓' is below the head of a structure.

Phrases which are licensed by the annotated c-structure in (14) can be visualized as follows:

(15) [tree: a V′ node dominating a V node annotated ↑ = ↓, which in turn dominates the verb's material; both nodes map to the same f-structure]

(16) shows a V′ rule with an object:

(16) V′ → V        NP
          ↑ = ↓   (↑ OBJ) = ↓

The annotation on the NP signals that the obj value in the f-structure of the mother (↑ OBJ) is identical to the f-structure of the NP node, that is, to everything that is contributed from the material below the NP node (↓). This is shown in the figure in (17):

In the equation (↑ OBJ) = ↓, the arrows '↑' and '↓' correspond to feature structures. '↑' and '↓' stand for the *f* and *g* in equations such as (6).

(18) is an example with an intransitive verb and (19) is the corresponding visualization:

(18) sneezed V (↑ PRED) = 'SNEEZE⟨SUBJ⟩'
               (↑ TENSE) = PAST

(19) [V sneezed] ↦ [ pred  'SNEEZE⟨SUBJ⟩'
                     tense PAST ]

### **7.1.5 Semantics**

Following Dalrymple (2006: 90–92), *glue semantics* is the dominant approach to semantic interpretation in LFG (Dalrymple, Lamping & Saraswat 1993; Dalrymple 2001: Chapter 8). There are, however, other variants where Kamp's discourse representation structures (Kamp & Reyle 1993) are used (Frey & Reyle 1983b,a).

In the following, glue semantics will be presented in more detail.<sup>5</sup> Under a glue-based approach, it is assumed that f-structure is the level of syntactic representation which is crucial for the semantic interpretation of a phrase, that is, unlike in GB theory, it is not the position of arguments in the tree which plays a role in the composition of meaning, but rather functional relations such as SUBJ and OBJ. Glue semantics assumes that each substructure of the f-structure corresponds to a semantic resource connected to a meaning and, furthermore, that the meaning of a given f-structure comes from the sum of these parts. The way the meaning is assembled is regulated by certain instructions for the combination of semantic resources. These instructions are given as a set of logical premises written in linear logic, which serves as the *glue language*. The computation of the meaning of an utterance corresponds to a logical derivation.

The derivation proceeds on the basis of logical premises contributed by the words in an expression or possibly even by a syntactic construction itself. The requirements on how the meaning of the parts can be combined to yield the full meaning are expressed in linear logic, a resource-based logic. Linear logic differs from classical logic in that it does not allow premises of a derivation to go unused or to be used more than once. Hence, in linear logic, premises are resources which have to be used. This corresponds directly to the use of words in an expression: words contribute to the entire meaning exactly once. It is not possible to ignore them or to use their meaning more than once. A sentence such as *Peter knocked twice.* does not mean the same as *Peter knocked.* The meaning of *twice* must be included in the full meaning of the sentence. Similarly, the sentence cannot mean the same as *Peter knocked twice twice.*, since the semantic contribution of a given word cannot be used twice.

The syntactic structure for the sentence in (20a) together with its semantic representation is given in (20b):

(20) a. David yawned.

The semantic structure of this sentence is connected to the f-structure via the correspondence function 𝜎 (depicted here as a dashed line). The semantic representation is derived from the lexical information for the verb *yawned*, which is given in (21).

(21) λx.*yawn*′(x) : (↑ SUBJ)𝜎 ⊸ ↑𝜎

This formula is referred to as the *meaning constructor*. Its job is to combine the meaning of *yawned* – a one-place predicate λx.*yawn*′(x) – with the formula (↑ SUBJ)𝜎 ⊸ ↑𝜎 in

<sup>5</sup> The following discussion heavily draws from the corresponding section of Dalrymple (2006). (It is a translation of my translation of the original material into German.)

linear logic. Here, the connective ⊸ is the *linear implication* symbol of linear logic. The symbol carries the meaning that *if* a semantic resource (↑ SUBJ)𝜎 for the meaning of the subject is available, *then* a semantic resource for ↑𝜎 must be created which will stand for the entire meaning of the sentence. Unlike the implication operator of classical logic, the linear implication must consume and produce semantic resources: the formula (↑ SUBJ)𝜎 ⊸ ↑𝜎 states that if a semantic resource (↑ SUBJ)𝜎 is found, it is consumed and the semantic resource ↑𝜎 is produced.

Furthermore, it is assumed that a proper name such as *David* contributes its own semantic structure as a semantic resource. In an utterance such as *David yawned*, this resource is consumed by the verb *yawned*, which requires a resource for its SUBJ in order to produce the resource for the entire sentence. This corresponds to the intuition that a verb in any given sentence requires the meaning of its arguments in order for the entire sentence to be understood.

The f-structure of *David yawned* with the instantiated meaning constructors contributed by *David* and *yawned* is given in (22):

(22) f: [ pred 'YAWN⟨SUBJ⟩'
          subj g: [ pred 'DAVID' ] ]

     **[David]** *david*′ : g𝜎
     **[yawn]**  λx.*yawn*′(x) : g𝜎 ⊸ f𝜎

The left-hand side of the meaning constructor marked **[David]** is the meaning of the proper name *David*, namely *david*′. The left-hand side of the meaning constructor **[yawn]** is the meaning of the intransitive verb – a one-place predicate λx.*yawn*′(x).

Furthermore, one must still postulate further rules to determine the exact relation between the right-hand side (the glue) of the meaning constructors in (22) and the left-hand side (the meaning). For simple, non-implicational meaning constructors such as **[David]** in (22), the meaning on the left is the same as the meaning of the semantic structure on the right. Meaning constructors such as **[yawn]** have a λ-expression on the left, which has to be combined with another expression via functional application (see Section 2.3). The linear implication on the right-hand side must be applied in parallel. This combined process is shown in (23).

$$\text{(23)}\quad \frac{x : f_{\sigma}\qquad P : f_{\sigma}\multimap g_{\sigma}}{P(x) : g_{\sigma}}$$

The right-hand side of the rule corresponds to a logical conclusion via the *modus ponens* rule. With these correspondences between expressions in linear logic and the meanings themselves, we can proceed as shown in (24), which is based on Dalrymple (2006: 92). After combining the respective meanings of *yawned* and *David* and then carrying out β-reduction, we arrive at the desired result of *yawn*′(*david*′) as the meaning of *David yawned*.
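The resource-sensitive flavor of this derivation can be caricatured in a few lines: a meaning constructor becomes a pair of a meaning and a glue formula, and the rule in (23) consumes the premise exactly once. A toy sketch with invented names:

```python
# Glue derivation for 'David yawned': [David] provides the resource g,
# [yawn] consumes g and produces f, applying its lambda term on the way.

david = ("david'", "g")                        # david' : g
yawn = (lambda x: f"yawn'({x})", ("g", "f"))   # lambda x.yawn'(x) : g -o f

def modus_ponens(resource, implication):
    meaning, premise = resource
    func, (antecedent, conclusion) = implication
    assert premise == antecedent, "resource does not match the implication"
    return (func(meaning), conclusion)         # premise consumed exactly once

print(modus_ponens(david, yawn))               # ("yawn'(david')", 'f')
```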


Glue analyses of quantification, modification and other phenomena have been investigated in a volume on glue semantics (Dalrymple 1999). Particularly problematic for these approaches are cases where there appear to be too many or too few resources for the production of utterances. These kinds of cases have been discussed by Asudeh (2004).

## **7.1.6 Adjuncts**

Adjuncts are not selected by their head. The grammatical function adj is a non-governable grammatical function. Unlike arguments, where every grammatical function can only be realized once, a sentence can contain multiple adjuncts. The value of adj in the f-structure is therefore not a simple structure as with the other grammatical functions, but rather a set. For example, the f-structure for the sentence in (25a) contains an adj set with two elements: one for *yesterday* and one for *at noon*.

(25) a. David devoured a sandwich at noon yesterday.

b. [ pred 'DEVOUR⟨SUBJ,OBJ⟩'
     subj [ pred 'DAVID' ]
     obj  [ spec A
            pred 'SANDWICH' ]
     adj  { [ pred 'YESTERDAY' ],
            [ pred 'AT⟨OBJ⟩'
              obj [ pred 'NOON' ] ] } ]

The annotation on the c-structure rule for adjuncts requires that the f-structure of the adjuncts be part of the adj set of the mother's f-structure:

(26) V′ → V′       PP
          ↑ = ↓   ↓ ∈ (↑ ADJ)

The representation of adjuncts in a set is not sufficient to characterize the meaning of an utterance containing scope-bearing adjuncts (as for instance the negation in sentences like (31) on page 104). In order to determine scopal relations, one has to refer to the linear order of the adjuncts, that is, the c-structure. For linearization restrictions in LFG, see Zaenen & Kaplan (1995).

# **7.2 Passive**

Bresnan & Mchombo (1995) argue that one should view words as "atoms" of which syntactic structure is comprised (*lexical integrity*<sup>6</sup> ). Syntactic rules cannot create new words or make reference to the internal structure of words. Every terminal node (each "leaf" of the tree) is a word. It follows from this that analyses such as the GB analysis of Pollock (1989) in Figure 7.1 on the next page for the French example in (27) are ruled out (the figure is taken from Kuhn 2007: 617):

(27) Marie ne parlerait pas
     Marie neg speak.cond.3sg neg
     'Marie would not speak.'

In Pollock's analysis, the various morphemes are in specific positions in the tree and are combined only after certain movements have been carried out.

The assumption of lexical integrity is made by all theories discussed in this book with the exception of GB and Minimalism. However, formally, this is not a necessity, as it is also possible to connect morphemes to complex syntactic structures in theories such as Categorial Grammar, GPSG, HPSG, CxG, DG and TAG (Müller 2018b: Section 4). As far as I know, this kind of analysis has never been proposed.

Bresnan noticed that, as well as passivized verbs, there are passivized adjectives which show the same morphological idiosyncrasies as the corresponding participles (Bresnan 1982b: 21; Bresnan 2001: 31). Some examples are given in (28):

	- b. a recently given talk (give – given)
	- c. my broken heart (break – broken)
	- d. an uninhabited island (inhabit – inhabited)
	- e. split wood (split – split)

If one assumes lexical integrity, then adjectives have to be derived in the lexicon. If the verbal passive were not a lexical process, but rather a phrase-structural one, then the form identity would remain unexplained.

In LFG, grammatical functions are primitives, that is, they are not derived from a position in the tree (e.g., Subject = SpecIP). Words (fully inflected word-forms) determine the

<sup>6</sup> See Anderson (1992: 84) for more on lexical integrity.

Figure 7.1: Pollock's analysis of *Marie ne parlerait pas* 'Marie would not speak.' according to Kuhn (2007: 617)

grammatical function of their arguments. Furthermore, there is a hierarchy of grammatical functions. During participle formation in morphology, the highest verbal argument is suppressed. The next highest argument moves up and is not realized as the object but rather as the subject. This was explicitly encoded in earlier work (Bresnan 1982b: 8):

(29) Passivization rule:
     (SUBJ) ↦ ∅ / (OBL)
     (OBJ)  ↦ (SUBJ)

The first rule states that the subject is either not realized (∅) or it is realized as an oblique element (the *by*-PP in English). The second rule states that if there is an accusative object, this becomes the subject.

In later work, the assignment of grammatical functions was taken over by Lexical Mapping Theory (Bresnan & Kanerva 1989). It is assumed that thematic roles are ordered in a universally valid hierarchy (Bresnan & Kanerva 1989; Bresnan 2001: 307): agent > beneficiary > experiencer/goal > instrument > patient/theme > locative. Patient-like roles are marked as unrestricted ([−r]) in a corresponding representation, the so-called a-structure. Secondary patient-like roles are marked as *objective* ([+o]) and all other roles are marked as non-objective ([−o]). For the transitive verb *schlagen* 'to beat', we have the following:

$$\begin{array}{llcc} \text{(30)} & & \text{Agent} & \text{Patient} \\ \text{a-structure} & \textit{schlagen} \text{ 'beat'} & \langle\ x & y\ \rangle \\ & & [-\text{o}] & [-\text{r}] \end{array}$$

The mapping of a-structure to f-structure is governed by the following restrictions:

(31)	a. Subject-mapping principle: the most prominent role marked [−o] is mapped onto SUBJ; otherwise, the role marked [−r] is mapped onto SUBJ.
	b. The argument roles are connected to grammatical functions as shown in the following table. Non-specified values for o and r are to be understood as '+':

	|      | [−r] | [+r] |
	|------|------|------|
	| [−o] | SUBJ | OBLθ |
	| [+o] | OBJ  | OBJθ |

	c. Function-Argument Biuniqueness: every a-structure role must be associated with exactly one grammatical function and vice versa.

For the argument structure in (30), the principle in (31a) ensures that the agent x receives the grammatical function SUBJ. (31b) adds an o-feature with the value '+' so that the patient y is associated with OBJ:

$$\begin{array}{llcc} \text{(32)} & & \text{Agent} & \text{Patient} \\ \text{a-structure} & \textit{schlagen} \text{ 'beat'} & \langle\ x & y\ \rangle \\ & & [-\text{o}] & [-\text{r}] \\ & & \text{SUBJ} & \text{OBJ} \end{array}$$

Under passivization, the most prominent role is suppressed so that only the [−r] marked patient role remains. Following (31a), this role will then be mapped to the subject.

$$\begin{array}{llcc} \text{(33)} & & \text{Agent} & \text{Patient} \\ \text{a-structure} & \textit{schlagen} \text{ 'beat'} & \langle\ x & y\ \rangle \\ & & [-\text{o}] & [-\text{r}] \\ & & \emptyset & \text{SUBJ} \end{array}$$

Unlike the objects of transitive verbs, the objects of verbs such as *helfen* 'help' are marked as [+o] (Berman 1999). The lexical case of the objects is given in the a-structure, since this case (dative) is linked to a semantic role (Zaenen, Maling & Thráinsson 1985: 465). The corresponding semantic roles are obligatorily mapped to the grammatical function OBJθ.

$$\begin{array}{llcc} \text{(34)} & & \text{Agent} & \text{Beneficiary} \\ \text{a-structure} & \textit{helfen} \text{ 'help'} & \langle\ x & y\ \rangle \\ & & [-\text{o}] & [+\text{o}]/\text{DAT} \\ & & \text{SUBJ} & \text{OBJ}_\theta \end{array}$$

Passivization will yield the following:

$$\begin{array}{llcc} \text{(35)} & & \text{Agent} & \text{Beneficiary} \\ \text{a-structure} & \textit{helfen} \text{ 'help'} & \langle\ x & y\ \rangle \\ & & [-\text{o}] & [+\text{o}]/\text{DAT} \\ & & \emptyset & \text{OBJ}_\theta \end{array}$$

Since there is neither a [−o] nor a [−r] argument, no argument is connected to the subject function. The result is an association of arguments and grammatical functions that corresponds to the one found in impersonal passives.

These mapping principles may seem complex at first glance, but they play a role in analyzing an entire range of phenomena, e.g., the analysis of unaccusative verbs (Bresnan & Zaenen 1990). For the analysis of the passive, we can now say that the passive suppresses the highest [−o] role. It is no longer necessary to mention a possible object in the passive rule.
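The mapping principles can also be stated procedurally. The following Python sketch is my own reconstruction of the discussion above: roles are (label, features) pairs ordered by the thematic hierarchy, the table in (31b) is encoded as a dict, and the passive simply drops the highest [−o] role. All names are illustrative:

```
# Sketch of Lexical Mapping Theory: intrinsic [o]/[r] features are
# mapped to grammatical functions via the table in (31b).
TABLE = {('-', '-'): 'SUBJ', ('-', '+'): 'OBL_theta',
         ('+', '-'): 'OBJ',  ('+', '+'): 'OBJ_theta'}

def map_to_functions(roles, passive=False):
    roles = list(roles)
    # The passive suppresses the highest (= first) [-o] role.
    if passive and roles and roles[0][1].get('o') == '-':
        roles = roles[1:]
    # (31a): the most prominent [-o] role becomes SUBJ;
    # otherwise a [-r] role becomes SUBJ.
    subject = next((label for label, f in roles if f.get('o') == '-'), None)
    if subject is None:
        subject = next((label for label, f in roles if f.get('r') == '-'), None)
    assignment = {}
    for label, feats in roles:
        if label == subject:
            assignment[label] = 'SUBJ'
        else:
            # (31b): non-specified values for o and r count as '+'.
            assignment[label] = TABLE[(feats.get('o', '+'), feats.get('r', '+'))]
    return assignment

schlagen = [('Agent', {'o': '-'}), ('Patient', {'r': '-'})]
print(map_to_functions(schlagen))                # (32): Agent=SUBJ, Patient=OBJ
print(map_to_functions(schlagen, passive=True))  # (33): Patient=SUBJ

helfen = [('Agent', {'o': '-'}), ('Beneficiary', {'o': '+'})]
print(map_to_functions(helfen))                  # (34): Agent=SUBJ, Beneficiary=OBJ_theta
print(map_to_functions(helfen, passive=True))    # (35): Beneficiary=OBJ_theta, no SUBJ
```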

# **7.3 Verb position**

There are two possibilities for the analysis of verb placement in German: an analysis with a verb trace in final position, parallel to the GB analysis, and an analysis with so-called extended head domains, in which the verb contributes its information directly from the C position.


In the analysis of extended head domains, the verb is simply omitted from the verb phrase. The following preliminary variant of the VP rule is used:<sup>7</sup>

(36) VP → NP* (V)

All components of the VP are optional, as indicated by the brackets and by the Kleene star. The Kleene star stands for arbitrarily many occurrences of a symbol, including zero occurrences. As in GB analyses, the verb in verb-first clauses is in C. Following a number of GB works (Haider 1993, 1995, 1997a; Sternefeld 2006: Section IV.3), no I projection is assumed, since it is difficult to motivate its existence for German (Berman 2003a: Section 3.2.2). The verb contributes its f-structure information from the C position. Figure 7.2 on the facing page contains a simplified version of the analysis proposed by Berman (2003a: 41).

<sup>7</sup> See Bresnan (2001: 110), Zaenen & Kaplan (2002: 413) and Dalrymple (2006: 84) for corresponding rules with optional constituents on the right-hand side of the rule. Zaenen & Kaplan (2002: 413) suggest a rule that is similar to (36) for German.

Figure 7.2: Analysis of verb placement following Berman (2003a: 41)

After what we learned about phrase structure rules in Chapters 2 and 5, it may seem strange to allow VPs without V. This is not a problem in LFG, however, since for the analysis of a given sentence, it only has to be ensured that all the necessary parts (and only these) are present. This is ensured by the constraints on completeness and coherence. Where exactly the information comes from is not important. In Figure 7.2, the verb information does not come from the VP, but rather from the C node. C′ is licensed by a special rule:

$$\begin{array}{ccccc} \text{(37)} & \text{C}' & \rightarrow & \text{C} & \text{VP} \\ & & & \uparrow = \downarrow & \uparrow = \downarrow \end{array}$$

In LFG rules, there is normally only one element annotated with '↑ = ↓', namely the head. In (37), there are two such elements, which is why both contribute equally to the f-structure of the mother. The head domain of V has been extended to C. The information about SUBJ and OBJ comes from the VP and the information about PRED from C.
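The interplay of completeness and coherence can be illustrated with a small sketch. Here f-structures are modelled as plain Python dicts and the value of PRED lists the governed functions; this encoding is mine and greatly simplified:

```
# Completeness: every function governed by the PRED must be present.
# Coherence: every governable function that is present must be governed.
GOVERNABLE = {'SUBJ', 'OBJ', 'OBJ_theta', 'OBL', 'COMP', 'XCOMP'}

def complete(f):
    return all(gf in f for gf in f['PRED']['args'])

def coherent(f):
    return (GOVERNABLE & set(f)) <= set(f['PRED']['args'])

f = {'PRED': {'rel': 'verschlingen', 'args': ['SUBJ', 'OBJ']},
     'SUBJ': {'PRED': {'rel': 'David', 'args': []}}}
print(complete(f), coherent(f))   # False True: the OBJ is still missing
```

The checks are indifferent to where the information comes from, which is exactly why it does not matter whether the verb's contribution enters the f-structure from within the VP or from C.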

# **7.4 Local reordering**

Two possibilities for treating local reordering have been discussed in the literature:<sup>8</sup> an analysis with movement and traces as in GB, and an analysis in which the various orders are base-generated, that is, licensed directly by phrase structure rules.


<sup>8</sup>Kaplan (1989: 20–21) shows how one can write grammars in the ID/LP format in LFG. A GPSG-like analysis of German constituent order has not been proposed in the LFG framework.


If one assumes that traces are relevant for the semantic interpretation of a given structure, then the first option has the same problems as movement-based GB analyses. These have already been discussed in Section 3.5.

In what follows, I will present the analysis proposed by Berman (1996: Section 2.1.3) in a somewhat simplified form. Case and grammatical functions of verbal arguments are determined in the lexicon (Berman 1996: 22). (38) shows the lexical entry for the verb *verschlingen* 'devour':<sup>9,10</sup>


<sup>9</sup> The four cases in German can be represented using two binary features (GOV, OBL) (Berman 1996: 22). Nominative corresponds to GOV− and OBL− and accusative to GOV+ and OBL−. This kind of encoding allows one to leave case partially underspecified. If one does not provide a value for GOV, then an element with OBL− is compatible with both nominative and accusative. Since this underspecification is not needed in the following discussion, I will omit this feature decomposition and insert the case values directly.

(i) (↓ CASE) = ACC ⇒ (↑ OBJ) = ↓

Karttunen (1989: Section 2.1) makes a similar suggestion for Finnish in the framework of Categorial Grammar. Such analyses are not entirely unproblematic as case cannot always be reliably paired with grammatical functions. In German, there are temporal accusatives (ii.a), as well as verbs with two accusative objects (ii.b–c) and predicative accusatives (ii.d).

	- b. Er lehrte ihn den Ententanz.
		he taught him.acc the.acc duck.dance
	- c. Das kostet ihn einen Taler.
		that costs him.acc a.acc taler
	- d. Sie nannte ihn einen Lügner.
		she called him.acc a.acc liar

All of these accusatives can occur in long-distance dependencies (see Section 7.5):

(iii) Wen glaubst du, dass ich getroffen habe.
	who believe you that I met have
	'Who do you think I met?'

*wen* is not the object of *glauben* 'believe' and as such cannot be included in the f-structure of *glauben* 'believe'. One would have to reformulate the implication in (i) as a disjunction of all possible grammatical functions of the accusative and in addition account for the fact that accusatives can come from a more deeply embedded f-structure.

Bresnan (2001: 202) assumes that nonlocal dependencies crossing a clause involve a gap in German. With such a gap, one can assume that case is only assigned locally within the verbal projection. In any case, one would have to distinguish several types of frontings in German, and the specification of the case/grammatical function interaction would be much more complicated than (i).

<sup>10</sup>Alternative analyses derive the grammatical function of an NP from its case (Berman 2003a: 37 for German; Bresnan 2001: 187, 201 for German and Russian).

Berman proposes an analysis that does not combine the verb with all its arguments and adjuncts at once, as was the case in GPSG. Instead, she chooses the other extreme and assumes that the verb forms a VP directly, without being combined with any adjunct or argument. The rule for this is shown in (39):

$$\begin{array}{cccc} \text{(39)} & \text{VP} & \rightarrow & \text{(V)} \\ & & & \uparrow = \downarrow \end{array}$$

At first sight, this may seem odd since a V such as *verschlingen* 'devour' does not have the same distribution as a verb with its arguments. However, one should recall that the constraints pertaining to coherence and completeness of f-structures play an important role so that the theory does not make incorrect predictions.

Since the verb can occur in initial position, it is marked as optional in the rule in (39) (see Section 7.3).

The following rule can be used additionally to combine the verb with its subject or object.

$$\begin{array}{ccccc} \text{(40)} & \text{VP} & \rightarrow & \text{NP} & \text{VP} \\ & & & (\uparrow \text{SUBJ}|\text{OBJ}|\text{OBJ}_\theta) = \downarrow & \uparrow = \downarrow \end{array}$$

The '|' here stands for a disjunction, that is, the NP can be either the subject or the object of the superordinate f-structure. Since VP occurs both on the left and right-hand side of the rule in (40), it can be applied multiple times. The rule is not complete, however. For instance, one has to account for prepositional objects, for clausal arguments, for adjectival arguments and for adjuncts. See footnote 12 on page 243.

Figure 7.3 on the next page shows the analysis for (41a).

	- a. [dass] David den Apfel verschlingt
		that David the.acc apple devours
		'that David devours the apple'
	- b. [dass] den Apfel David verschlingt
		that the.acc apple David devours

The analysis of (41b) is shown in Figure 7.4 on the following page. The analysis of (41b) differs from the one of (41a) only in the order of the replacement of the NP node by the subject or object.

One further fact must be discussed: in rule (39), the verb is optional. If it is omitted, the VP is empty. The VP rule in (40) can therefore have an empty VP on its right-hand side. Such a VP is then simply omitted as well, even though the VP symbol on the right-hand side of rule (40) is not marked as optional: the symbol becomes optional in effect, as a result of the interaction with the rest of the grammar.

# **7.5 Long-distance dependencies and functional uncertainty**

We have seen that LFG can explain phenomena such as passivization, local reordering as well as verb placement without transformations. In Chapter 5 on GPSG, we already

Figure 7.4: Analysis of OSV order following Berman (1996)

saw that the development of a transformation-less analysis for long-distance dependencies constitutes a real achievement. In LFG, Kaplan & Zaenen (1989) proposed another transformation-less analysis of long-distance dependencies, which we will consider in further detail in what follows.

In example (42), the displaced constituent *Chris* is characterized by two functions:

(42) Chris, we think that David saw.

For one, it has an argument function which is normally realized in a different position (the OBJ function of *saw* in the above example); in addition, it has a discourse function: it receives a particular information-structural prominence in this construction (topic in the matrix clause). In LFG, topic and focus are assumed to be grammaticalized discourse functions (furthermore, subj is classified as the default discourse function). Only grammaticalized discourse functions are represented on the level of f-structure, that is,

those that are created by a fixed syntactic mechanism and that interact with the rest of the syntax.

Unlike argument functions, the discourse functions topic and focus are not lexically subcategorized and are therefore not subject to the completeness and coherence conditions. The values of discourse function features like topic and focus are identified with an f-structure that bears an argument function. (43) gives the f-structure for the sentence in (42):

The connecting line means that the value of topic is identical to the value of comp|obj. In Chapter 6 on feature descriptions, I used boxes for structure sharing rather than connecting lines, since boxes are more common across frameworks. It is possible to formulate the structure sharing in (43) as an f-structure constraint as in (44):

(44) (↑ topic) = (↑ comp obj)

Fronting operations like the one in (42) are possible from various levels of embedding: for instance, (45a) shows an example with less embedding, in which the object is located in the same f-structure as the topic. The object in (42), by contrast, comes from a clause embedded under *think*.

The f-structure corresponding to (45a) is given in (45b):

The identity restriction for topic and object can be formulated in this case as in (46):

(46) (↑ topic) = (↑ obj)

Example (47a) shows a case of even deeper embedding than in (42) and (47b,c) show the corresponding f-structure and the respective restriction.


The restrictions in (44), (46) and (47c) are c-structure constraints. The combination of a c-structure with (44) is given in (48):

$$\begin{array}{ccccc} \text{(48)} & \text{CP} & \rightarrow & \text{XP} & \text{C}' \\ & & & (\uparrow \text{topic}) = \downarrow & \uparrow = \downarrow \\ & & & (\uparrow \text{topic}) = (\uparrow \text{comp obj}) & \end{array}$$

(48) states that the first constituent contributes to the topic value in the f-structure of the mother and furthermore that this topic value has to be identical to that of the object in the complement clause. We have also seen examples of other embeddings of various depths. We therefore need restrictions of the following kind as in (49):

(49) a. (↑ topic) = (↑ obj)
	b. (↑ topic) = (↑ comp obj)
	c. (↑ topic) = (↑ comp comp obj)
	d. …

The generalization emerging from these equations is given in (50):

(50) (↑ topic) = (↑ comp* obj)

Here, '*' stands for an unrestricted number of occurrences of comp. This device of leaving open how the discourse function is identified with a grammatical function is known as *functional uncertainty*; see Kaplan & Zaenen (1989).
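Functional uncertainty amounts to searching through the f-structure along a regular path. The following minimal sketch is my own encoding, with f-structures as nested Python dicts and 'COMP*' standing for zero or more COMP steps:

```
# Resolve a path with a Kleene-starred attribute over nested f-structures.
def resolve(f, path):
    if not path:
        return [f]
    attr, rest = path[0], path[1:]
    if attr.endswith('*'):
        results = resolve(f, rest)            # zero occurrences of attr
        inner = f.get(attr[:-1])
        if isinstance(inner, dict):           # one more occurrence of attr
            results += resolve(inner, path)
        return results
    return resolve(f[attr], rest) if attr in f else []

# Simplified f-structure for 'Chris, we think that David saw.'
f = {'COMP': {'OBJ': {'PRED': 'Chris'}}}
print(resolve(f, ['COMP*', 'OBJ']))           # [{'PRED': 'Chris'}]
```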

As was shown in the discussion of examples (2) and (3) on page 226, it is not only topics that can be placed in the specifier position of CP in English: a focus can occur there too. One can use disjunctions in LFG equations and express the corresponding condition as follows:

```
(51) (↑ topic|focus) = (↑ comp* obj)
```
One can introduce a special symbol df, which stands for the disjunction topic|focus of discourse functions. (51) can then be abbreviated as in (52):

(52) (↑ df) = (↑ comp* obj)

The final version of the c-structure rule for fronting in English will therefore have the form of (53):<sup>11</sup>

$$\begin{array}{ccccc} \text{(53)} & \text{CP} & \rightarrow & \text{XP} & \text{C}' \\ & & & (\uparrow \text{df}) = \downarrow & \uparrow = \downarrow \\ & & & (\uparrow \text{df}) = (\uparrow \text{comp* obj}) & \end{array}$$

In German, objects as well as nearly any other constituent (e.g., subjects, sentential complements, adjuncts) can be fronted. The c-structure rule for this is shown in (54):<sup>12</sup>

$$\begin{array}{ccccc} \text{(54)} & \text{CP} & \rightarrow & \text{XP} & \text{C}' \\ & & & (\uparrow \text{df}) = \downarrow & \uparrow = \downarrow \\ & & & (\uparrow \text{df}) = (\uparrow \text{comp* gf}) & \end{array}$$

Here, gf is an abbreviation for a disjunction of grammatical functions which can occur in the prefield. Figure 7.5 shows the analysis of the sentence in (55):

(55) Den Apfel verschlingt David.
	the.acc apple devours David.nom
	'David is devouring the apple.'

Neither the finite verb nor the object *den Apfel* 'the apple' is realized within the VP. The finite verb is realized in C and contributes its f-structure information as a co-head to the VP f-structure. The NP in the prefield adds information to the f-structure of the sentence under topic, which is one of the options for resolving df, and the topic value is identified with the obj grammatical function via the functional uncertainty equation (↑ df) = (↑ comp* gf).

# **7.6 Summary and classification**

This section is for advanced readers. It compares LFG with other theories introduced in the book. So I suggest coming back here after reading Chapters 8–12.

LFG is a constraint-based theory and utilizes feature descriptions and PSG rules. Grammatical functions are treated as primitives of the theory, which sets LFG apart from most of the other theories covered in this book. They are not defined structurally (as in GB).

<sup>11</sup>Note that the two disjunctions that are abbreviated by the respective occurrences of df are independent in principle. This is unwanted: we want to talk about either a topic or a focus, not about both a topic and a focus in the mother f-structure. So additional machinery is needed to ensure that both occurrences of df refer to the same discourse function.

<sup>12</sup>Berman (1996) uses the symbol ZP for symbols in the prefield rather than XP in (54). She formulates various phrase structure rules for ZPs, which replace ZP with NP, PP, AP and various adjuncts. Following Berman, ZPs can also be combined with the verb in the middle field. For reasons of exposition, I refrained from using ZP symbols in the formulation of the VP rule (40) in Section 7.4 and instead used NP directly.

Figure 7.5: Analysis of verb second

LFG is a lexicalist theory. Like GPSG, LFG can do without transformations. Processes affecting argument structure such as passivization are analyzed by means of lexical rules. Whereas GPSG treats long-distance dependencies using the percolation of information in trees, LFG uses functional uncertainty: a part of the f-structure is identified with another f-structure that can be embedded to an arbitrary depth. Coherence and completeness ensure that the long-distance dependency can be correctly resolved, that is, it ensures that a fronted object is not assigned to an f-structure which already contains an object or one in which no object may occur.

While LFG does contain a phrase-structural component, this plays a significantly less important role compared to other models of grammar. There are rules in which all constituents are optional and it has even been proposed for some languages that there are rules where the part of speech of the constituents is not specified (see Section 13.1.2). In these kinds of grammars, f-structure, coherence and completeness work together to ensure that the grammar only allows well-formed structures.

LFG differs from other theories such as HPSG and variants of Construction Grammar in that feature structures are untyped. Generalizations can therefore not be represented in type hierarchies. Until a few years ago, the hierarchical organization of knowledge in inheritance hierarchies did not form part of theoretical analyses. In computer implementations, there were macros but these were viewed as abbreviations without any theoretical status. It is possible to organize macros into hierarchies and macros were discussed explicitly in Dalrymple, Kaplan & King (2004) with reference to capturing linguistic generalizations. Asudeh, Dalrymple & Toivonen (2008) suggest using macros not only for the organization of lexical items but also for capturing generalizations regarding c-structure annotations. Because of these developments, there was a greater convergence between LFG and other theories such as HPSG and CxG.

Williams (1984) compares analyses in LFG with GB. He shows that many analyses are in fact transferable: the function that f-structure has in LFG is handled by the Theta-Criterion and Case Theory in GB. LFG can explicitly differentiate between subjects and non-subjects. In GB, on the other hand, a clear distinction is made between external and internal arguments (see Williams 1984: Section 1.2). In some variants of GB, as well as in HPSG and CxG, the argument with subject properties (if there is one) is marked explicitly (Haider 1986a, Heinz & Matiasek 1994, Müller 2003b, Michaelis & Ruppenhofer 2001). This special argument is referred to as the *designated argument*. In infinitival constructions, subjects are often not expressed inside the infinitival phrase. Nevertheless, the unexpressed subject is usually coreferential with an argument of the matrix verb:

	- b. Er zwingt ihn, [das Buch zu lesen].
		he forces him the book to read
		'He is forcing him to read the book.'

This is a fact that every theory needs to be able to capture, that is, every theory must be able to differentiate between subjects and non-subjects.

For a comparison of GB/Minimalism and LFG/HPSG, see Kuhn (2007).

# **Comprehension questions**


# **Exercises**


(57) Dem Kind hilft Sandy.
	the.dat child helps Sandy.nom
	'Sandy helps the child.'

Provide the necessary c-structure rules. What kind of f-structure is licensed? Draw a syntactic tree with corresponding references to the f-structure. For fronted constituents, simply write NP rather than expanding the XP node. The c-structure rule for the NP can also be omitted and a triangle can be drawn in the tree.

### **Further reading**

Section 7.1 was based extensively on the textbook and introductory article of Dalrymple (2001, 2006). Additionally, I have drawn from teaching materials of Jonas Kuhn from 2007. Bresnan (2001) is a comprehensive textbook in English for the advanced reader. Some of the more in-depth analyses of German in LFG are Berman (1996, 2003a). Schwarze & Alencar (2016) is an introduction to LFG that uses French examples. The authors demonstrate how the XLE system can be used for the development of a French LFG grammar. The textbook also discusses the Finite State Morphology component that comes with the XLE system. Dalrymple (2023) is a large handbook on LFG research covering various syntactic phenomena, various descriptive levels and also other subdisciplines of linguistics.

Levelt (1989) developed a model of language production based on LFG. Pinker (1984) – one of the best-known researchers on language acquisition – used LFG as the model for his theory of acquisition. For another theory on first and second language acquisition that uses LFG, see Pienemann (2005).

Müller (2018a) discusses recent phrasal approaches to argument structure constructions in LFG, argues for a lexical treatment and provides such a treatment in HPSG.

Wechsler & Asudeh (2021) and Przepiórkowski (2023) compare LFG with HPSG. The first reference is a chapter in the HPSG handbook published by Language Science Press and the second one is a chapter in the LFG handbook.

# **8 Categorial Grammar**

Categorial Grammar is the second oldest of the approaches discussed in this book. It was developed in the 1930s by the Polish logician Kazimierz Ajdukiewicz (Ajdukiewicz 1935). Since syntactic and semantic descriptions are tightly connected and all syntactic combinations correspond to semantic ones, Categorial Grammar is popular amongst logicians and semanticists. Some outstanding works in the field of semantics making use of Categorial Grammar are those of Richard Montague (1974). Other important works come from David Dowty in Columbus, Ohio (1979), Michael Moortgat in Utrecht (1989), Glyn Morrill in Barcelona (1994), Bob Carpenter in New York (1998) and Mark Steedman in Edinburgh (1991, 1996, 2000). A large fragment for German using Montague Grammar has been developed by von Stechow (1979). The 2569-page grammar of the *Institut für Deutsche Sprache* in Mannheim (Zifonun, Hoffmann & Strecker 1997) contains Categorial Grammar analyses in the relevant chapters. Fanselow (1981) worked on morphology in the framework of Montague Grammar. Uszkoreit (1986a), Karttunen (1986, 1989) and Calder, Klein & Zeevat (1988) developed combinations of unification-based approaches and Categorial Grammar.

The basic operations for combining linguistic objects are rather simple and well-understood so that it is no surprise that there are many systems for the development and processing of Categorial Grammars (Yampol & Karttunen 1990, Carpenter 1994, Bouma & van Noord 1994, Lloré 1995, König 1999, Moot 2002, White & Baldridge 2003, Baldridge, Chatterjee, Palmer & Wing 2007, Morrill 2012, 2017). An important contribution has been made by Mark Steedman's group (see for instance Clark, Hockenmaier & Steedman 2002, Clark & Curran 2007).

Implemented fragments exist for the following languages:



In addition, Baldridge, Chatterjee, Palmer & Wing (2007: 15) mention an implementation for Classical Arabic.

Some of the systems for the processing of Categorial Grammars have been augmented by probabilistic components so that the processing is robust (Osborne & Briscoe 1997, Clark, Hockenmaier & Steedman 2002). Some systems can derive lexical items from corpora, and Briscoe (2000) and Villavicencio (2002) use statistical information in their UG-based language acquisition models.

# **8.1 General remarks on the representational format**

In what follows, I introduce some basic assumptions of Categorial Grammar. After these introductory remarks, I will discuss specific analyses that were developed by Steedman (1996) in the framework of Combinatory Categorial Grammar. There are other variants of Categorial Grammar, such as type-logical CG, the variety espoused by Morrill (1994), Dowty (1997), Moortgat (2011), and others, which cannot be discussed here.

## **8.1.1 Representation of valence information**

In Categorial Grammar, complex categories replace the subcat feature that is used in GPSG to ensure that a head can only be used with suitable grammatical rules. Simple phrase structure rules can be replaced with complex categories as follows:


vp/np stands for something that needs an np in order to form a vp.

In Categorial Grammar, there are only a few very abstract rules. One of these is forward application, also referred to as the multiplication rule:

(2) forward application: X/Y ∗ Y = X

This rule combines an X looking for a Y with a Y and requires that Y occurs to the right of X/Y. The result of this combination is an X that no longer requires a Y. X/Y is called the *functor* and Y is the *argument* of the functor.
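Forward application is simple enough to state in a few lines of code. In the following sketch, I encode a complex category as a (result, direction, argument) triple and atoms as strings; this encoding is mine, and backward application in (3) below would be the mirror image:

```
# Forward application: X/Y * Y = X.
def forward_apply(functor, argument):
    result, direction, arg = functor
    if direction == '/' and arg == argument:
        return result
    raise ValueError('forward application does not apply')

vp_np = ('vp', '/', 'np')            # the category vp/np
print(forward_apply(vp_np, 'np'))    # vp
```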

As in GB theory, valence is encoded only once in Categorial Grammar, in the lexicon. In GPSG, valence information was present in grammatical rules and in the subcat feature of the lexical entry.

Figure 8.1 on the next page shows how a lexical entry for a transitive verb is combined with its object. A derivation in CG is basically a binary branching tree; it is, however, mostly represented as follows: an arrow under a pair of categories indicates that these have been combined via a combinatorial rule. The direction of the arrow indicates the direction of the combination. The result is given beneath the arrow. Figure 8.2 on the facing page shows the tree corresponding to Figure 8.1.

$$\frac{\frac{chased}{vp/np} \quad \frac{Mary}{np}}{vp}$$

Figure 8.1: Combination of a verb and its object (preliminary)

Figure 8.2: Derivation in Figure 8.1 as a tree diagram

One usually assumes left associativity for '/'; that is, (vp/pp)/np = vp/pp/np.

If we look at the lexical entries in (1), it becomes apparent that the category v does not appear. The lexicon only determines what the product of combination of a lexical entry with its arguments is. The symbol for vp can also be eliminated: an (English) vp is something that requires an NP to its left in order to form a complete sentence. This can be represented as s\np. Using the rule for backward application, it is possible to compute derivations such as the one in Figure 8.3.

(3) Backward application: Y ∗ X\Y = X


Figure 8.3: Analysis of a sentence with a transitive verb

In Categorial Grammar, no explicit distinction is made between phrases and words: an intransitive verb is described in the same way as a verb phrase with an object: s\np. Equally, proper nouns are complete noun phrases, which are assigned the symbol np.

### **8.1.2 Semantics**

As already mentioned, Categorial Grammar is particularly popular among semanticists, as syntactic combinations always result in parallel semantic combinations; even for complex combinations such as those we will discuss in more detail in the following sections, there is a precise definition of meaning composition. In the following, we will take a closer look at the representational format discussed in Steedman (1996: Section 2.1.2).


Steedman proposes the following lexical entry for the verb *eats*:<sup>1</sup>

(4) eats := (s: *eat*′(x, y)\np3S:x)/np:y

In (4), the meaning of each category is given after the colon. Since nothing is known about the meaning of the arguments in the lexical entry of *eat*, the meaning is represented by the variables x and y. When the verb combines with an NP, the denotation of the NP is inserted. An example is given in (5):<sup>2</sup>

(5) (s: *eat*′(x, y)\np3S:x)/np:y ∗ np: *apples*′ = s: *eat*′(x, *apples*′)\np3S:x

When combining a functor with an argument, it must be ensured that the argument fits the functor, that is, it must be unifiable with it (for more on unification see Section 6.6). The unification of np:y with np: *apples*′ results in np: *apples*′, since *apples*′ is more specific than the variable y. Apart from its occurrence in the term np:y, y occurs in another position in the description of the verb (s: *eat*′(x, y)\np3S:x) and therefore also receives the value *apples*′ there. Thus, the result of this combination is s: *eat*′(x, *apples*′)\np3S:x, as shown in (5).

Steedman notes that this notation becomes less readable with more complex derivations and instead uses the more standard λ-notation:

(6) eats := (s\np3S)/np: λy.λx.*eat*′(x, y)

Lambdas are used to allow access to open positions in complex semantic representations (see Section 2.3). A semantic representation such as λy.λx.*eat*′(x, y) can be combined with the representation of *apples* by removing the first lambda expression and inserting the denotation of *apples* in all the positions where the corresponding variable (in this case, y) appears (see Section 2.3 for more on this point):

(7) λy.λx.*eat*′(x, y) *apples*′ ⇒ λx.*eat*′(x, *apples*′)

This removal of lambda expressions is called β-reduction.

If we use the notation in (6), the combinatorial rules must be modified as follows:

(8) X/Y:f ∗ Y:a = X: f a
	Y:a ∗ X\Y:f = X: f a

In such rules, the semantic contribution of the argument (a) is written after the semantic contribution of the functor (f). The open positions in the denotation of the functor are represented using lambdas. The argument can be combined with the first lambda expression using β-reduction.
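Since Python has functions as first-class values, the rules in (8) can be mimicked directly: the functor's meaning is a nested lambda, and application triggers β-reduction. The string representation of the predicate and the example sentence (*Harry eats apples*) are my own illustrative choices:

```
# lam y. lam x. eat'(x, y), with strings standing in for semantic terms
eats = lambda y: lambda x: f"eat'({x}, {y})"

vp = eats("apples'")   # forward application: lam x. eat'(x, apples')
s = vp("harry'")       # backward application: eat'(harry', apples')
print(s)               # eat'(harry', apples')
```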

Figure 8.4 on the facing page shows the derivation of a simple sentence with a transitive verb. After forward and backward application, β-reduction is immediately applied.

<sup>1</sup> I have adapted his notation to correspond to the one used in this book.

<sup>2</sup> The assumption that *apples* means *apples*′ and not, say, *apples*′(z) minus the quantifier contribution is a simplification here.


Figure 8.4: Meaning composition in Categorial Grammar

### **8.1.3 Adjuncts**

As noted in Section 1.6, adjuncts are optional. In phrase structure grammars, this can be captured, for example, by rules that have a certain element (for instance a VP) on the left-hand side of the rule and the same element and an adjunct on the right-hand side of the rule. Since the symbol on the left is the same as the one on the right, this rule can be applied arbitrarily many times. (9) shows some examples of this:

(9) a. VP → VP PP
	b. Noun → Noun PP

Using these rules, one can analyze an arbitrary number of PPs following a VP or a noun.

In Categorial Grammar, adjuncts have the following general form: X\X or X/X. Adjectives are modifiers, which must occur before the noun. They have the category n/n. Modifiers occurring after nouns (prepositional phrases and relative clauses) have the category n\n instead.<sup>3</sup> For VP-modifiers, X is replaced by the symbol for the VP (s\np) and this yields the relatively complex expression (s\np)\(s\np). Adverbials in English are VP-modifiers and have this category. Prepositions that can be used in a PP modifying a verb require an NP in order to form a complete PP and therefore have the category ((s\np)\(s\np))/np. Figure 8.5 on the next page gives an example of an adverb (*quickly*) and a preposition (*round*). Note that the result of the combination of *round* and *the garden* corresponds to the category of the adverb (s\np)\(s\np). In GB theory, adverbs and prepositions were also placed into a single class (see page 94). This overarching class was then divided into subclasses based on the valence of the elements in question.
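In the toy encoding used earlier, such modifier categories can be built mechanically from the category they modify. The following is a sketch with names of my own choosing:

```
# Building a postposed modifier category X\X from a category X.
def post_modifier(x):
    return (x, '\\', x)

vp = ('s', '\\', 'np')
adverb = post_modifier(vp)         # (s\np)\(s\np)
preposition = (adverb, '/', 'np')  # ((s\np)\(s\np))/np
print(preposition)
```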

# **8.2 Passive**

In Categorial Grammar, the passive is analyzed by means of a lexical rule (Dowty 1978: 412; Dowty 2003: Section 3.4). (10) shows the rule given in Dowty (2003: 49):

(10) Syntax: α ∈ (s\np)/np → PST-PART(α) ∈ PstP/pp<sub>by</sub>
	Semantics: α′ → λyλx[α′(x)(y)]

<sup>3</sup> In Categorial Grammar, there is no category symbol like X̄ for intermediate projections of X̄ theory. So, rather than assuming N̄/N̄, CG uses n/n. See Exercise 2.


Figure 8.5: Example of an analysis with adjuncts in Categorial Grammar

Here, PstP stands for past participle and pp<sub>by</sub> is an abbreviation for a verb phrase modifier of the form vp\vp, or rather (s\np)\(s\np). The rule says the following: if a word belongs to the set of words with the category (s\np)/np, then the word with past participle morphology also belongs in the set of words with the category PstP/pp<sub>by</sub>.
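As a lexical rule, (10) is essentially a conditional statement over lexicon entries. The following sketch, with string categories, a crude '-ed' placeholder for PST-PART morphology and invented names, illustrates the syntactic half of the rule:

```
# If a word has the category (s\np)/np, its past participle
# gets the category PstP/pp_by.
def passive_lexical_rule(lexicon):
    derived = {}
    for word, cat in lexicon.items():
        if cat == r'(s\np)/np':
            derived[word + 'ed'] = 'PstP/pp_by'
    return derived

print(passive_lexical_rule({'touch': r'(s\np)/np'}))
# {'touched': 'PstP/pp_by'}
```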

(11a) shows the lexical entry for the transitive verb *touch* and (11b) the result of rule application:

	- a. touch: (s\np)/np
	- b. touched: PstP/pp<sub>by</sub>

The auxiliary *was* has the category (s\np)/PstP and the preposition *by* has the category pp<sub>by</sub>/np, or in its unabbreviated form ((s\np)\(s\np))/np. In this way, (12) can be analyzed as in Figure 8.6.

(12) John was touched by Mary.


Figure 8.6: Analysis of the passive using a lexical rule

The question as to how to analyze the pair of sentences in (13) still remains unanswered.<sup>4</sup>

	- b. The book was given to Mary.

*gave* has the category ((s\np)/pp)/np, that is, the verb must first combine with an NP (*the book*) and a PP (*to Mary*) before it can be combined with the subject. The problem

<sup>4</sup> Thanks to Roland Schäfer (p.c., 2009) for pointing out these data to me.

is that the rule in (10) cannot be applied to *gave* with a *to*-PP since the pp argument is sandwiched between both np arguments in ((s\np)/pp)/np. One would have to generalize the rule in (10) somehow by introducing new technical means<sup>5</sup> or assume additional rules for cases such as (13b).

# **8.3 Verb position**

Steedman (2000: 159) proposed an analysis with variable branching for Dutch, that is, there are two lexical entries for *at* 'eat': an initial one with its arguments to the right, and another occupying final position with its arguments to its left.

	- a. *at* 'eat' in verb-final position: (s+SUB\np)\np
	- b. *at* 'eat' in verb-initial position: (s−SUB/np)/np

Steedman uses the feature sub to differentiate between subordinate and non-subordinate sentences. Both lexical items are related via lexical rules.

One should note here that the NPs are combined with the verb in different orders. The normal order is:

	- a. in verb-final position: (s+SUB\np[nom])\np[acc]
	- b. in verb-initial position: (s−SUB/np[acc])/np[nom]

The corresponding derivations for German sentences with a bivalent verb are shown in Figures 8.7 and 8.8.


Figure 8.7: Analysis of verb-final sentences following Steedman


Figure 8.8: Analysis of verb-initial sentences following Steedman

In Figure 8.7, the verb is first combined with an accusative object, whereas in Figure 8.8, the verb is first combined with the subject. For criticism of these kinds of analyses with variable branching, see Netter (1992) and Müller (2005b, 2023a).

<sup>5</sup> Baldridge (p.c., 2010) suggests using regular expressions in a general lexical rule for passive.


Jacobs (1991) developed an analysis which corresponds to the verb movement analysis in GB. He assumes verb-final structures, that is, there is a lexical entry for verbs where arguments are selected to the left of the verb. A transitive verb would therefore have the entry in (16a). Additionally, there is a trace in verb-final position that requires the arguments of the verb and the verb itself in initial position. (16b) shows what the verb trace looks like for a transitive verb in initial position:

	- a. Transitive verb in final position: (s\np[nom])\np[acc]
	- b. Verb trace for the analysis of verb-first: ((s\((s\np[nom])\np[acc]))\np[nom])\np[acc]

The entry for the verb trace is very complex. It is probably simpler to examine the analysis in Figure 8.9.


Figure 8.9: Analysis of verb-initial sentences following Jacobs (1991)

The trace is the head of the entire analysis: it is first combined with the accusative object and then with the subject. In a final step, it is combined with the transitive verb in initial position.<sup>6</sup> A problem with this kind of analysis is that the verb *isst* 'eats', as well as *er* 'he' and *ihn* 'him'/'it', are arguments of the verb trace in (17).

(17) Morgen [isst [er [ihn _]]]
	tomorrow eats he him
	'He will eat it/him tomorrow.'

Since adjuncts can occur before, after or between arguments of the verb in German, one would expect that *morgen* 'tomorrow' can occur before the verb *isst*, since *isst* is just a normal argument of the verbal trace in final position. As adjuncts do not change the categorial status of a projection, the phrase *morgen isst er ihn* 'He will eat it/him tomorrow.' should be able to occur in the same positions as *isst er ihn*. This is not the case, however. If we replace *isst er ihn* by *morgen isst er ihn* in (18a), the result is (18b), which is ungrammatical.

	- a. Deshalb isst er ihn.
		therefore eats he him
		'Therefore he eats it/him.'
	- b. * Deshalb morgen isst er ihn.
		therefore tomorrow eats he him

<sup>6</sup> See Netter (1992) for a similar analysis in HPSG.

An approach which avoids this problem comes from Kiss & Wesche (1991) (see Section 9.3). They assume that there is a verb in initial position which selects a projection of the verb trace. If adverbials are only combined with verbs in final position, then a direct combination of *morgen* 'tomorrow' and *isst er ihn* 'he is eating it/him' is ruled out. If one assumes that the verb in first position is the functor, then it is possible to capture the parallels between complementizers and verbs in initial position (Höhle 1997): finite verbs in initial position differ from complementizers only in requiring a projection of a verb trace, whereas complementizers require projections of overt verbs:

	- a. dass [er ihn isst]
		that he him eats
	- b. Isst [er ihn _ ]
		eats he him

This description of verb position in German captures the central insights of the GB analysis in Section 3.2.

# **8.4 Local reordering**

Up to now, we have seen combinations of functors and arguments where the arguments were either to the left or to the right of the functor. The saturation of arguments always took place in a fixed order: the argument furthest to the right was combined first with the functor, e.g., (s\np)/pp first combined with the PP, and the result of this combination was combined with the NP.

There are a number of possibilities for analyzing ordering variants in German: Uszkoreit (1986b) suggests accounting for the possible orders lexically, that is, each possible order corresponds to a lexical item of its own. One would therefore have at least six lexical items for a ditransitive verb. Briscoe (2000: 257) and Villavicencio (2002: 96–98) propose a variant of this analysis where the order of arguments is modified in the syntax: a syntactic rule can, for example, change the order (S/PRT)/NP into (S/NP)/PRT.

A different approach is suggested by Steedman & Baldridge (2006). They discuss various options for ordering arguments attested in the languages of the world. This includes languages in which the order of combination is free as well as languages where the direction of combination is free. Steedman and Baldridge introduce the following convention for representing categories: elements in curly brackets can be discharged in any order. '|' in place of '\' or '/' serves to indicate that the direction of combination is free. Some prototypical examples are shown in (20):


Hoffman (1995: Section 3.1) has proposed an analysis analogous to that of Japanese for Turkish and this could also be used in conjunction with an analysis of verb position for German. This would correspond to the GB/MP analysis of Fanselow (2001) or the HPSG analysis presented in Section 9.4.

# **8.5 Long-distance dependencies**

Steedman (1989a: Section 1.2.4) proposes an analysis of long-distance dependencies without movement or empty elements. For examples such as (21), he assumes that the category of *Harry must have been eating* or *Harry devours* is s/np.

	- a. These apples, Harry must have been eating.
	- b. apples which Harry devours

The fronted NP *these apples* and the relative pronoun *which* are both functors in the analysis of (21) which take s/np as their argument. Using the machinery introduced up to now, we cannot assign the category s/np to the strings *Harry must have been eating* and *Harry devours* in (21) although it is intuitively the case that *Harry devours* is a sentence missing an NP. We still require two further extensions of Categorial Grammar: type raising and forward and backward composition. Both of these operations will be introduced in the following sections.

## **8.5.1 Type raising**

The category np can be transformed into the category s/(s\np) by *type raising*. If we combine this category with s\np, then we get the same result as if we had combined np and s\np with the backward application rule in (3). (22a) shows the combination of an NP with a VP (a sentence missing an NP to its left). The combination of the type-raised NP with the VP is given in (22b).

(22) a. np ∗ s\np = s
	b. s/(s\np) ∗ s\np = s

In (22a), a verb or verb phrase selects an NP to its left (s\np). In (22b), an NP having undergone type raising selects a verb or verb phrase to its right which requires an NP to its left (s\np).

Type raising simply reverses the direction of selection: the VP in (22a) is the functor and the NP is the argument, whereas in (22b), it is the type raised NP which acts as the functor, and the VP is the argument. In each case, the result of the combination is the same. This change of selectional direction may just seem like a trick at first glance, but as we will see, this trick can be extremely useful. First, however, we will introduce forward and backward composition.

## **8.5.2 Forward and backward composition**

(23) shows the rules for forward and backward composition.

	- a. Forward composition (> B)
		X/Y ∗ Y/Z = X/Z
	- b. Backward composition (< B)
		Y\Z ∗ X\Y = X\Z

These rules will be explained using forward composition as an example. (23a) can be understood as follows: X/Y more or less means: if I find a Y, then I am a complete X. In the combinatorial rule, X/Y is combined with Y/Z. Y/Z stands for a Y that is not yet complete and is still missing a Z. The requirement that Y must find a Z in order to be complete is postponed: we pretend that Y is complete and use it anyway, but we still bear in mind that something is actually still missing. Hence, if we combine X/Y with Y/Z, we get something which becomes an X when combined with a Z.
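Both operations can be added to the toy tuple encoding used for application above. The sketch below (the category names and the simplified entry for *must* are mine) shows how type raising and forward composition conspire to turn *Harry must* into s/vp:

```
# Type raising: X => T/(T\X); forward composition: X/Y * Y/Z = X/Z.
def type_raise(x, t='s'):
    return (t, '/', (t, '\\', x))

def forward_compose(xy, yz):
    x, d1, y1 = xy
    y2, d2, z = yz
    if d1 == d2 == '/' and y1 == y2:
        return (x, '/', z)
    raise ValueError('forward composition does not apply')

vp = ('s', '\\', 'np')
harry = type_raise('np')              # s/(s\np)
must = (vp, '/', 'vp')                # (s\np)/vp, a simplification
print(forward_compose(harry, must))   # ('s', '/', 'vp'), i.e., s/vp
```

Repeating the composition step with suitable entries for *have*, *been* and *eating* yields s/np for the whole string, as in the derivation discussed in the next section.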

## **8.5.3 Analysis of long-distance dependencies**

By using forward composition, we can assign *Harry must have been eating* the category s/np. Figure 8.10 shows how this works. *must* is a verb which requires an unmarked


Figure 8.10: Application of forward composition to VP-chains

infinitive form, *have* requires a participle and *been* must combine with a present participle. In the above figure, the arrow with a small 'T' stands for type raising, whereas the arrows with a 'B' indicate composition. The direction of composition is shown by the direction of the arrow.

For the analysis of (21a), we are still missing one small detail, a rule that turns the NP at the beginning of the sentence into a functor which can be combined with s/np. Normal type raising cannot handle this because it would produce s/(s\np) when s/(s/np) is required.

Steedman (1989a: 217) suggests the rule in (24):

(24) Topicalization (↑):
	X ⇒ st/(s/X)
	where X ∈ { np, pp, vp, ap, s′ }

st stands for a particular type of sentence (s), namely one with topicalization (t). The ⇒ expresses that one can type raise any X into an st/(s/X).


If we replace X with np, we can turn *these apples* into st/(s/np) and complete the analysis of (21a) as shown in Figure 8.11 on the next page. The mechanism presented here will of course also work for dependencies that cross sentence boundaries. Figure 8.12 on the following page shows the analysis for (25):

(25) Apples, I believe that Harry eats.



Figure 8.11: Analysis of long-distance dependencies in Categorial Grammar


Figure 8.12: Analysis of long-distance dependencies across sentence boundaries

Using the previously described tools, it is, however, only possible to describe extractions where the fronted element in the sentence would have occurred at the right edge of the phrase without fronting. This means it is not possible to analyze sentences where the middle argument of a ditransitive verb has been extracted (Steedman 1985: 532). Pollard (1988: 406) provides the derivation in Figure 8.13 for (26).

(26) Fido we put downstairs.


Figure 8.13: Analysis of (26) following Pollard (1988: 406)

In this analysis, it is not possible to combine *we* and *put* using the rule in (23a) since s\np is not directly accessible: breaking down ((s\np)/pp)/np into functor and argument gives us (s\np)/pp and np. In order to deal with such cases, we need another variant of composition:

(27) Forward composition for n=2 (> BB)
	X/Y ∗ (Y/Z1)/Z2 = (X/Z1)/Z2

With this addition, it is now possible to combine the type-raised *we* with *put*. The result is (s/pp)/np. The topicalization rule in (24), however, requires an element to the right of st with the form (s/X). This is not the case in Figure 8.13. For the NP *Fido*, we need a functor category which allows that the argument itself is complex. The rule which is needed for the case in (26) is given in (28).

(28) Topicalization for n=2 (↑↑):
	X2 ⇒ (st/X1)/((s/X1)/X2)
	where X1 and X2 ∈ { NP, PP, VP, AP, S′ }

If we assume that verbs can have up to four arguments (e.g., *buy*: buyer, seller, goods, price), then it would be necessary to assume a further rule for composition as well as another topicalization rule. Furthermore, one requires a topicalization rule for subject extraction (Pollard 1988: 405). Steedman has developed a compact notation for the rules discussed above, but if one considers what exactly these representations stand for, one still arrives at the same number of rules.

# **8.6 Summary and classification**

This section is for advanced readers. It discusses problems of the analysis of relative clauses. Readers who are mainly interested in an introduction to the core theory may skip it.

The operations of Combinatory Categorial Grammar, which go beyond those of standard Categorial Grammar, allow for so much flexibility that it is even possible to assign a category to sequences of words that would not normally be treated as a constituent. This is an advantage for the analysis of coordination (see Section 21.6.2) and, furthermore, Steedman (1991) has argued that intonation data support the constituent status of these strings. See also Section 15.2 for a direct model of incremental language processing in Categorial Grammar. In phrase structure grammars, it is possible to use GPSG mechanisms to pass information about relative pronouns contained in a phrase up the tree. These techniques are not used in CG and this leads to a large number of recategorization rules for topicalization and furthermore leads to inadequate analyses of pied-piping constructions in relative clauses. As the topicalization analysis was already discussed in Section 8.5, I will briefly elaborate on relative clauses here.

Steedman & Baldridge (2006: 614) present an analysis of long-distance dependencies using the following relative clause in (29):

(29) the man that Manny says Anna married

The relative pronoun is the object of *married* but occurs outside the clause *Anna married*. Steedman assumes the lexical entry in (30) for relative pronouns:

(30) (n\n)/(s/np)


This means the following: if there is a sentence missing an NP to the right of a relative pronoun, then the relative pronoun can form an N-modifier (n\n) with this sentence. The relative pronoun is the head (functor) in this analysis.

Utilizing both additional operations of type raising and composition, the examples with relative clauses can be analyzed as shown in Figure 8.14. The lexical entry for the


Figure 8.14: Categorial Grammar analysis of a relative clause with long-distance dependency

verbs corresponds to what was discussed in the preceding sections: *married* is a normal transitive verb and *says* is a verb that requires a sentential complement and forms a VP (s\np) with it. This VP yields a sentence when combined with an NP. The noun phrases in Figure 8.14 have been type raised. Using forward composition, it is possible to combine *Anna* and *married* to yield s/np. This is the desired result: a sentence missing an NP to its right. *Manny* and *says* and then *Manny says* and *Anna married* can also be combined via forward composition and we then have the category s/np for *Manny says Anna married*. This category can be combined with the relative pronoun using forward application and we then arrive at n\n, which is exactly the category for postnominal modifiers.

However, the assumption that the relative pronoun constitutes the head is problematic since one has to then go to some lengths to explain pied-piping constructions such as those in (31).

	- a. Here's the minister [[in the middle of whose sermon] the dog barked].<sup>7</sup>
	- b. Reports [[the height of the lettering on the covers of which] the government prescribes] should be abolished.<sup>8</sup>

In (31), the relative pronoun is embedded in a phrase that has been extracted from the rest of the relative clause. The relative pronoun in (31a) is the determiner of *sermon*. Depending on the analysis, *whose* is the head of the phrase *whose sermon*. The NP is embedded under *of* and the phrase *of whose sermon* depends on *middle*. The entire NP *the middle of the sermon* is a complement of the preposition *in*. It would be quite a stretch to claim that *whose* is the head of the relative clause in (31a). The relative pronoun in (31b) is even more deeply embedded. Steedman (1996: 50) gives the following lexical entries for *who*, *whom* and *which*:

<sup>7</sup> Pollard & Sag (1994: 212).

<sup>8</sup> Ross (1967: 109).


(32) a. (n\n)/(s/np)
	b. ((n\n)/(s/pp))\(pp/np)
	c. ((n\n)/(s/np))\(np/np)

Using (32b) and (32c), it is possible to analyze (33a) and (33b):

	- b. a subject on which Keats (expects that Chapman) will speak

In the analysis of (33b), *which* requires a preposition to its left (pp/np) so it can form the category (n\n)/(s/pp). This category needs a sentence lacking a PP to its right in order to form a post-nominal modifier (n\n). In the analysis of (33a), *the cover of* becomes np/np by means of composition and *which* with the lexical entry (32c) can combine with *the cover of* to its left. The result is the category (n\n)/(s/np), that is, something that requires a sentence missing an NP.

Ross's example in (31b) can also be analyzed using (32c):

(34) reports [[the height of the lettering on the covers of]np/np which](n\n)/(s/np) the government prescribes

The complex expression *the height of the lettering on the covers of* becomes np/np after composition and the rest of the analysis proceeds as that of (33a).

In addition to entries such as those in (32), we also need further entries to analyze sentences such as (35), where the relative phrase has been extracted from the middle of the clause (see Pollard 1988: 410):

(35) Fido is the dog which we put downstairs.

The problem here is similar to what we saw with topicalization: *we put* does not have the category s/np but rather (s/pp)/np and as such cannot be directly combined with the relative pronoun in (30).

Morrill (1995: 204) discusses the lexical entry in (32b) for the relative pronoun in (36):

(36) about which John talked

In the lexical entry in (32b), *which* requires something to its left that needs a noun phrase in order to form a complete prepositional phrase; that is, *which* selects a preposition. Morrill noted that further lexical items need to be postulated for cases like (37), in which the relative pronoun occurs in the middle of the fronted phrase.

(37) the contract [the loss of which after so much wrangling] John would finally have to pay for

These and other cases could be handled by additional lexical stipulations. Morrill instead proposes additional types of the combination of functors and arguments, which allow a functor B ↑ A to enclose its argument A and produce B, or a functor A ↓ B to enclose its argument to then yield B (p. 190). Even with these additional operations, he still needs the two lexical items in (38) for the derivation of a pied-piping construction with an argument NP or a PP:

(38) a. (NP ↑ NP) ↓ (N\N)/(S/NP)
	b. (PP ↑ NP) ↓ (N\N)/(S/PP)

These lexical items are still not enough, however: the PP in (38b) corresponds to an argument PP, as required for (36). To analyze (31a), which involves a PP adjunct, we need to assume the category (s\np)/(s\np) for the prepositional phrase *in the middle of whose sermon*. We therefore require at least three additional items for relative pronouns.

By introducing new operations, Morrill manages to reduce the number of lexical entries for *which*; however, the fact remains that he has to mention the categories which can occur in pied-piping constructions in the lexical entry of the relative pronoun.

Furthermore, the observation that relative clauses consist of a phrase with a relative pronoun plus a sentence missing a relative phrase is lost. This insight can be kept if one assumes a GPSG-style analysis where information about whether there is a relative pronoun in the relative phrase can be passed up to the highest node of the relative phrase. The relative clause can then be analyzed as the combination of a sentence with a gap and an appropriately marked relative phrase. For the discussion of such analyses in the framework of GB theory and HPSG/CxG, see Section 21.10.3.

Kubota (p.c. 2020), discussing this section and Kubota (2021) with me, pointed out Sag's (1997: 455) claim that relative phrases in English are restricted to NPs and PPs. This information has to be encoded somewhere. Sag (1997: 455) encodes it on the phrasal level, specifying in the phrasal schema for relative clauses that the relative phrase has to be an NP or a PP. Categorial Grammar, being an extremely lexicalized theory, has to express this constraint lexically. I think the claim that only NPs and PPs can be fillers in relative clauses is not correct for English, and I will return to this question below. In any case, this restriction does not hold for German, since the relative phrase can be an NP, a PP, an adverb, or even a verb phrase or an adjectival phrase:


	e. ein Umstand, [den zu berücksichtigen] er immer vergißt<sup>9</sup> (VP)
		a fact which to consider he always forgets
		'a fact which he always forgets to consider'

	f. der Mann, [auf *den* stolz] wohl niemand sein würde (AdjP)
		the man of whom proud part nobody be would
		'the man whom probably nobody would be proud of'

And these possibilities of having relative phrases of different categories are not restricted to German.

The following examples are taken from Nanni & Stillings (1978: 311).

	- b. The elegant parties, [to be admitted to one of *which*] was a privilege, had usually been held at Delmonico's. (VP)

Nanni & Stillings (1978: 311–312) treat *compared* as an adjective, so the pied-piped phrase would be an AP, but even if one rejects this categorization, examples like (41) involving clear adjectives are possible:

(41) this son [proud of *whom*] Peter always was

Huddleston et al. (2002: 1053) discuss the following examples with *when*, *why*, and *where*:

	- b. That's the reason [*why* she resigned].
	- c. This is much better than the hotel [*where* we stayed last year].

These elements are adverbs. It follows from the discussion of the examples in (41) and (42) that the part of speech of the relative phrase should not be restricted to noun or preposition. It might be useful to have such restrictions for some types of relative clauses, but it is not a restriction holding for all relative clauses. Hence, the schema for relative clauses of the kind discussed here would be (43) both for German and for English:

(43) RC → XP[rel ⟨ *i* ⟩], S[slash ⟨ XP ⟩ ]

This captures the generalization expressed above: a relative clause consists of a phrase containing a relative pronoun and a sentence from which this relative phrase is extracted. The category of the relative phrase is determined by two things. First, it must be possible to extract the respective category from the sentence: the relative phrase plays a role within this sentence, and its category and properties like case are determined there. Second, it must be possible to build a phrase of the respective category that contains a relative pronoun. In the case of adverbial relative phrases, the relative phrase has to be an adverbial element. Since *how* does not function as a relative element in English, it cannot appear as a relative phrase. German has *wie* 'how' as a relative element, and hence we find examples like (39d).

<sup>9</sup> (Bech 1955: 79). See Haider (1985a) and Müller (1999b: Section 10.7) for a discussion of pied-piping in relative clauses with fronted verbal projections.



## **Comprehension questions**


## **Exercises**

Analyze the following expressions in Categorial Grammar:

	- (44) The children in the room laugh loudly.
	- (45) the picture of Mary

Compare the resulting analysis with the structure given in Figure 2.4 on page 67 and think about which categories of X syntax the categories in Categorial Grammar correspond to.

## **Further reading**

Mark Steedman discusses a variant of Categorial Grammar, *Combinatory Categorial Grammar*, in a series of books and articles: Steedman (1991, 2000), Steedman & Baldridge (2006).

Lobin (2003) compares Categorial Grammar with Dependency Grammar and Pickering & Barry (1993) suggest a combination of Dependency Grammar and Categorial Grammar, which they call Dependency Categorial Grammar. Kubota (2021) compares Categorial Grammar with HPSG.

Briscoe (2000) and Villavicencio (2002) discuss UG-based acquisition models in the framework of Categorial Grammar.

# **9 Head-Driven Phrase Structure Grammar**

Head-Driven Phrase Structure Grammar (HPSG) was developed by Carl Pollard and Ivan Sag in the mid-1980s at Stanford and at the Hewlett-Packard research laboratories in Palo Alto (Pollard & Sag 1987, 1994; see Flickinger et al. 2021 for more on the history of HPSG). Like LFG, HPSG is part of so-called West Coast linguistics. Another similarity to LFG is that HPSG aims to provide a theory of competence which is compatible with performance (Sag & Wasow 2011, 2015, Wasow 2021, see also Chapter 15).

The formal properties of the description language for HPSG grammars are well-understood and there are many systems for processing such grammars (Dörre & Seiffert 1991, Dörre & Dorna 1993, Popowich & Vogel 1991, Uszkoreit, Backofen, Busemann, Diagne, Hinkelman, Kasper, Kiefer, Krieger, Netter, Neumann, Oepen & Spackman 1994, Erbach 1995, Schütz 1996, Schmidt, Theofilidis, Rieder & Declerck 1996, Schmidt, Rieder & Theofilidis 1996, Uszkoreit, Backofen, Calder, Capstick, Dini, Dörre, Erbach, Estival, Manandhar, Mineur & Oepen 1996, Müller 1996c, 2004d, Carpenter & Penn 1996, Penn & Carpenter 1999, Götz, Meurers & Gerdemann 1997, Copestake 2002, Callmeier 2000, Dahllöf 2003, Meurers, Penn & Richter 2002, Penn 2004, Müller 2007d, Sato 2008, Kaufmann 2009, Slayden 2012, Packard 2015).<sup>1</sup> Currently, the LKB system by Ann Copestake and the TRALE system, which was developed by Gerald Penn (Meurers, Penn & Richter 2002, Penn 2004), have the most users. The DELPH-IN consortium – whose grammar fragments are based on the LKB – and various TRALE users have developed many small and some large grammar fragments of various languages. The following is a list of implementations in different systems:


<sup>1</sup>Uszkoreit et al. (1996) and Bolc et al. (1996) compare systems that were available or were developed at the beginning of the 1990s. Melnik (2007) compares LKB and TRALE. See also Müller (2015c: Section 5.1).



The first implemented HPSG grammar was a grammar of English developed in the Hewlett-Packard labs in Palo Alto (Flickinger, Pollard & Wasow 1985, Flickinger 1987). Grammars for German were developed in Heidelberg, Stuttgart and Saarbrücken in the LILOG project. Subsequently, grammars for German, English and Japanese were developed in Heidelberg, Saarbrücken and Stanford in the Verb*mobil* project. Verb*mobil* was the largest ever AI project in Germany. It was a machine translation project for spoken language in the domains of trip planning and appointment scheduling (Wahlster 2000).

Currently there are two larger groups that are working on the development of grammars: the DELPH-IN consortium (Deep Linguistic Processing with HPSG)<sup>2</sup> and the group that developed out of the network CoGETI (Constraintbasierte Grammatik: Empirie, Theorie und Implementierung). Many of the grammar fragments that are listed above were developed by members of DELPH-IN and some were derived from the Grammar Matrix which was developed for the LKB to provide grammar writers with a typologically motivated initial grammar that corresponds to the properties of the language under development (Bender, Flickinger & Oepen 2002). The CoreGram project<sup>3</sup> is a similar project that was started at the Freie Universität Berlin and is now being run at the Humboldt-Universität zu Berlin. It is developing grammars for German, Danish, Persian, Maltese, Mandarin Chinese, Spanish, French, Welsh, and Yiddish that share a common core. Constraints that hold for all languages are represented in one place and used by all grammars. Furthermore, there are constraints that hold for certain language classes and, again, they are represented together and used by the respective grammars. So while the Grammar Matrix is used to derive grammars that individual grammar writers can use, adapt and modify to suit their needs, CoreGram really develops grammars for various languages that are used simultaneously and have to stay in sync. A description of the CoreGram can be found in Müller (2013b, 2015c).

There are systems that combine linguistically motivated analyses with statistical components (Brew 1995, Miyao et al. 2005, Miyao & Tsujii 2008) or learn grammars or lexica from corpora (Fouvry 2003, Cramer & Zhang 2009).

<sup>2</sup> http://www.delph-in.net/, accessed 2018-02-20.

<sup>3</sup> https://hpsg.hu-berlin.de/Projects/CoreGram.html, accessed 2023-02-20.

The following URLs point to pages on which grammars can be tested:


For further information on the interaction between HPSG and computational linguistics see Bender & Emerson (2021).

## **9.1 General remarks on the representational format**

HPSG has the following characteristics: it is a lexicon-based theory, that is, the majority of linguistic constraints are situated in the descriptions of words or roots. HPSG is sign-based in the sense of Saussure (1916): the form and meaning of linguistic signs are always represented together. Typed feature structures are used to model all relevant information.<sup>4</sup> Hence, HPSG belongs to the class of model-theoretic grammars (King 1999; see also Chapter 14 of this book). The feature structures can be described with feature descriptions such as in (1). Lexical entries, phrases and principles are always modeled and described with the same formal means. Generalizations about word classes or rule schemata are captured with inheritance hierarchies (see Section 6.2). Phonology, syntax and semantics are represented in a single structure. There are no separate levels of representation such as PF or LF in Government & Binding Theory. (1) shows an excerpt from the representation of a word such as *Grammatik* 'grammar'.

One can see that this feature description contains information about the phonology, syntactic category and semantic content of the word *Grammatik* 'grammar'. To keep things simple, the value of phonology (phon) is mostly given as an orthographic representation. In fully fleshed-out theories, the phon value is a complex structure that contains information about metrical grids and weak or strong accents. See Bird & Klein (1994), Orgun (1996), Höhle (1999), Walther (1999), Crysmann (2002: Chapter 6), and Bildhauer (2008) for phonology in the framework of HPSG. The details of the description in (1) will be explained in the following sections.

<sup>4</sup> Readers who read this book non-sequentially and who are unfamiliar with typed feature descriptions and typed feature structures should consult Chapter 6 first.


HPSG has adopted various insights from other theories and newer analyses have been influenced by developments in other theoretical frameworks. Functor-argument structures, the treatment of valence information and function composition have been adopted from Categorial Grammar. Function composition plays an important role in the analysis of verbal complexes in languages like German and Korean. The Immediate Dominance/Linear Precedence format (ID/LP format, see Section 5.1.2) as well as the Slash mechanism for long-distance dependencies (see Section 5.4) both come from GPSG. The analysis assumed here for verb position in German is inspired by the one that was developed in the framework of Government & Binding (see Section 3.2). Starting in 1995, HPSG also incorporated insights from Construction Grammar (Sag 1997, see also Section 10.6.2 on Sign-Based Construction Grammar, which is an HPSG variant).

### **9.1.1 Representation of valence information**

The phrase structure grammars discussed in Chapter 2 have the disadvantage that a great number of different rules is required for the various valence types. (2) shows some examples of such rules together with the corresponding verbs.


In order for the grammar not to create any incorrect sentences, one has to ensure that verbs are only used with appropriate rules.

	- b. * dass Peter erwartet
	       that Peter expects
	- c. * dass Peter über den Mann erwartet
	       that Peter about the man expects

Therefore, verbs (and heads in general) have to be divided into valence classes. These valence classes then have to be assigned to grammatical rules. One must therefore further specify the rule for transitive verbs in (2) as follows:

(4) S → NP[*nom*], NP[*acc*], V[*nom\_acc*]

Here, valence has been encoded twice: the rules state what kind of elements can or must occur, and the lexicon states which valence class a verb belongs to. In Section 5.5, it was pointed out that morphological processes need to refer to valence information. Hence, it is desirable to remove redundant valence information from grammatical rules. For this reason, HPSG – like Categorial Grammar – includes descriptions of the arguments of a head in the lexical entry of that head. There are the features specifier (spr) and complements (comps), whose values are lists containing descriptions of the elements that must combine with a head in order to yield a complete phrase. (5) gives some examples for the verbs in (2):

(5) Valence lists for finite verbs:


The table and the following figures use comps as a valence feature. Former versions of HPSG (Pollard & Sag 1987) used the feature subcat instead, which stands for subcategorization. It is often said that a head subcategorizes for certain arguments. See page 91 for more on the term *subcategorization*. Depending on the language, subjects are treated differently from other arguments (see for example Chomsky (1981a: 26–28), Hoekstra (1987: 33)). The subject in SVO languages like English has properties that differ from those of objects. For example, the subject is said to be an extraction island. This is not the case for SOV languages like German and hence it is usually assumed that all arguments of finite verbs are treated alike (Pollard 1996; Eisenberg 1994a: 376). Therefore the subject is included in the lists above. I will return to English shortly.

Figure 9.1 shows the analysis of (6a), and the analysis of (6b) is given in Figure 9.2:

	- b. [dass] Peter Maria erwartet
	     that Peter Maria expects
	     'that Peter expects Maria'

Figure 9.1: Analysis of *Peter schläft* 'Peter sleeps' in *dass Peter schläft* 'that Peter sleeps'

In Figures 9.1 and 9.2, one element of the comps list is combined with its head in each local tree. The elements that are combined with the selecting head are then no longer present in the comps list of the mother node. V[comps ⟨ ⟩] corresponds to a complete phrase (VP or S). The boxes with numbers show the structure sharing (see Section 6.4). Structure sharing is the most important means of expression in HPSG. It plays a central role for phenomena such as valence, agreement and long-distance dependencies. In the examples above, 1 indicates that the description in the comps list is identical to another daughter in the tree. The descriptions contained in valence lists are usually partial descriptions, that is, not all properties of the argument are exhaustively described. Therefore, it is possible that a verb such as *schläft* 'sleeps' can be combined with various kinds of linguistic objects: the subject can be a pronoun, a proper name or a complex noun phrase; it only matters that the linguistic object in question is complete (has an empty spr list and an empty comps list) and bears the correct case.<sup>5</sup>

Figure 9.2: Analysis of *Peter Maria erwartet* 'Peter expects Maria.'

As mentioned above, researchers working on German usually assume that subjects and objects should be treated similarly since they do not differ in fundamental ways as they do in SVO languages like English. Hence, for German, all arguments are represented in the same list. However, for SVO languages it proved useful to assume a special valence feature for preverbal dependents (Borsley 1987; Pollard & Sag 1994: Chapter 9). The arguments can be split into subjects, which are represented in the spr list, and other arguments (complements), which are represented in the comps list. The English equivalent of our table for German is given as (7):


<sup>5</sup> Furthermore, it must agree with the verb. This is not shown here.

The analysis of (8) is shown in Figure 9.3.

(8) Kim talks about the summer.

Figure 9.3: Analysis of *Kim talks about the summer.*

A head is combined with all its complements first and then with its specifier.<sup>6</sup> So, *talks* is combined with *about the summer* and the resulting VP is combined with its subject *Kim*. The spr list works like the comps list: if an element is combined with its head, it is not contained in the spr list of the mother. Figure 9.3 also shows the analysis of an NP: nouns select a determiner via spr. The combination of *the* and *summer* is complete as far as specifiers are concerned and hence the spr list at the node for *the summer* is the empty list. Nominal elements with empty spr list and comps list will be abbreviated as NP. Similarly, fully saturated linguistic objects with Ps as heads are PPs.
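The stepwise saturation of comps and spr can be made concrete with a small sketch. The following Python snippet is merely illustrative – the names `Sign`, `combine_complement` and `combine_specifier` are invented here and are not part of HPSG or of any of the systems mentioned above; real grammars unify feature descriptions instead of merely popping list elements:

```
# Illustrative model of SPR/COMPS saturation; not actual LKB/TRALE code.
from dataclasses import dataclass, field

@dataclass
class Sign:
    phon: list                                  # orthographic stand-in for PHON
    spr: list = field(default_factory=list)     # descriptions of specifiers
    comps: list = field(default_factory=list)   # descriptions of complements

def combine_complement(head: Sign, comp: Sign) -> Sign:
    # Head-complement combination: one element of COMPS is checked off.
    # A real grammar would unify the description with the complement's SYNSEM.
    assert head.comps, "head has no open complement slot"
    return Sign(phon=head.phon + comp.phon, spr=head.spr, comps=head.comps[1:])

def combine_specifier(head: Sign, spec: Sign) -> Sign:
    # Specifier combination applies once COMPS is fully saturated.
    assert not head.comps and head.spr
    return Sign(phon=spec.phon + head.phon, spr=head.spr[1:], comps=[])

kim   = Sign(phon=["Kim"])
talks = Sign(phon=["talks"], spr=["NP"], comps=["PP[about]"])
pp    = Sign(phon=["about", "the", "summer"])

vp = combine_complement(talks, pp)   # talks about the summer
s  = combine_specifier(vp, kim)      # Kim talks about the summer
print(s.phon, s.spr, s.comps)        # empty SPR and COMPS: a saturated sign
```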

(7) provides the spr and comps values of some example verbs. In addition it also provides the arg-st value. arg-st stands for *argument structure* and is a list of all arguments of a head. This argument structure list plays a crucial role in establishing the connection between syntax (valence) and semantics. The term for this is *linking*. We will deal with linking in more detail in Section 9.1.6.

<sup>6</sup> I present the analyses in a bottom-up way here, but it is very important that HPSG does not make any statements about the order in which linguistic objects are combined. This is crucial when it comes to psycholinguistic plausibility of linguistic theories. See Chapter 15 for discussion.

After this brief discussion of English constituent structure, I will turn to German again and ignore the spr feature. The value of spr in all the verbal structures that are discussed in the following is the empty list.

### **9.1.2 Representation of constituent structure**

As already noted, feature descriptions in HPSG serve as the sole descriptive inventory of morphological rules, lexical entries and syntactic rules. The trees we have seen thus far are only visualizations of the constituent structure and do not have any theoretical status. There are also no rewrite rules in HPSG.<sup>7</sup> The job of phrase structure rules is handled by feature descriptions. Information about dominance is represented using dtr features (head daughter and non-head daughters); information about precedence is implicitly contained in phon. (9) shows the representation of phon values in a feature description corresponding to the tree in Figure 9.4.

Figure 9.4: Analysis of *dem Mann* 'the man'


In (9), there is exactly one head daughter (head-dtr). The head daughter is always the daughter containing the head. In a structure with the daughters *das* 'the' and *Bild von Maria* 'picture of Maria', the latter would be the head daughter. In principle, there can be multiple non-head daughters. If we were to assume a flat structure for a sentence with a ditransitive verb, as in Figure 2.1 on page 54, we would have three non-head daughters. It also makes sense to assume binary branching structures without heads (see Müller 2007a: Chapter 11 for an analysis of relative clauses). In such structures we would also have more than one non-head daughter, namely exactly two.
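The role of the dtr features can be made concrete with a schematic rendering of a structure like the one in Figure 9.4 as nested Python dictionaries. This is only a rough picture of the feature geometry, not the data format of any processing system:

```
# Sketch of a headed phrase for 'dem Mann': dominance is encoded by the
# DTR features, precedence implicitly in the order of the PHON values.
dem  = {"phon": ["dem"]}
mann = {"phon": ["Mann"]}

dem_mann = {
    "phon": dem["phon"] + mann["phon"],   # ['dem', 'Mann']
    "head-dtr": mann,                     # the daughter containing the head
    "non-head-dtrs": [dem],               # a list: possibly several daughters
}
print(dem_mann["phon"])
```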

Before it is shown how it is ensured that only those head-complement structures are licensed in which the argument matches the requirements of the head, I will present the general structure of feature descriptions in HPSG. The structure presented at the start of this chapter is repeated in (10) with all the details relevant to the present discussion:

<sup>7</sup>However, phrase structure rules are used in some computer implementations of HPSG in order to improve the efficiency of processing.

In the outer layer, there are the features phon and synsem. As previously mentioned, phon contains the phonological representation of a linguistic object. The value of synsem is a feature structure which contains syntactic and semantic information that can be selected by other heads. The daughters of phrasal signs are represented outside of synsem. This ensures that there is a certain degree of locality involved in selection: a head cannot access the internal structure of the elements which it selects (Pollard & Sag 1987: 143–145; 1994: 23). See also Sections 10.6.2.1 and 18.2 for a discussion of locality. Inside synsem, there is information relevant in local contexts (local, abbreviated to loc) as well as information important for long-distance dependencies (nonlocal or nonloc for short). Locally relevant information includes syntactic (category or cat), and semantic (content or cont) information. Syntactic information encompasses information that determines the central characteristics of a phrase, that is, the head information. This is represented under head. Further details of this will be discussed in Section 9.1.4. Among other things, the part of speech of a linguistic object belongs to the head properties of a phrase. As well as head, spr and comps belong to the information contained inside cat. The semantic content of a sign is present under cont. The type of the cont value is *mrs*, which stands for *Minimal Recursion Semantics* (Copestake, Flickinger, Pollard & Sag 2005). An MRS structure is comprised of an index and a list of relations (relations or rels) which restrict this index. Of the nonlocal features, only slash is given here. There are further features for dealing with relative and interrogative clauses (Pollard & Sag 1994; Sag 1997; Ginzburg & Sag 2000; Holler 2005), which will not be discussed here.

As can be seen, the description of the word *Grammatik* 'grammar' becomes relatively complicated. In theory, it would be possible to list all properties of a given object directly in a single list of feature-value pairs. This would, however, have the disadvantage that the identity of groups of feature-value pairs could not be expressed as easily. Using the feature geometry in (10), one can express the fact that the cat values of both conjuncts in symmetric coordinations such as those in (11) are identical.

	- b. Er [kennt] und [liebt] diese Schallplatte.
	     he.nom knows and loves this.acc record
	     'He knows and loves this record.'
	- c. Er ist [dumm] und [arrogant].
	     he is dumb and arrogant
	     'He is dumb and arrogant.'

(11b) should be compared with the examples in (12). In (12a), the verbs select an accusative and a dative object, respectively, and in (12b), the verbs select an accusative and a prepositional object:

	- b. * weil er auf Maria kennt und wartet
	       because he for Maria knows and waits
	     Intended: 'because he knows Maria and waits for her'

While the English translation of (12a) is fine, since both *know* and *help* take an accusative object, (12a) itself is out, since *kennt* 'knows' takes an accusative and *hilft* 'helps' a dative object. Similarly, (12b) is out, since *kennt* 'knows' selects an accusative object and *wartet* 'waits' selects a prepositional phrase containing the preposition *auf* 'for'.

If valence and the part of speech information were not represented in one common sub-structure, we would have to state separately that utterances such as (11) require that both conjuncts have the same valence and part of speech.

After this general introduction of the feature geometry that is assumed here, we can now turn to the Head-Complement Schema:


### **Schema 1 (Head-Complement Schema (binary branching, preliminary version))**

```
head-complement-phrase ⇒

 synsem|loc|cat|comps 1
 head-dtr|synsem|loc|cat|comps 1 ⊕ ⟨ 2 ⟩
 non-head-dtrs ⟨ [ synsem 2 ] ⟩
```
Schema 1 states the properties a linguistic object of the type *head-complement-phrase* must have. (For more on types see Section 9.1.5.) The arrow in Schema 1 stands for a logical implication and not for the arrow of rewrite rules as we know it from phrase structure grammars. '⊕' (*append*) is a relation which combines two lists. (13) shows possible splits of a list that contains two elements:

$$\begin{array}{ll}
\text{(13)} & \langle\, x, y \,\rangle = \langle\, x \,\rangle \oplus \langle\, y \,\rangle \ \text{or}\\
 & \phantom{\langle\, x, y \,\rangle ={}} \langle\rangle \oplus \langle\, x, y \,\rangle \ \text{or}\\
 & \phantom{\langle\, x, y \,\rangle ={}} \langle\, x, y \,\rangle \oplus \langle\rangle
\end{array}$$

The list ⟨ *x, y* ⟩ can be subdivided into two lists each containing one element, or alternatively into the empty list and ⟨ *x, y* ⟩.

Schema 1 can be read as follows: if an object is of the type *head-complement-phrase* then it must have the properties on the right-hand side of the implication. In concrete terms, this means that these objects always have a valence list which corresponds to 1 , that they have a head daughter with a valence list that can be divided into two sublists 1 and ⟨ 2 ⟩ and also that they have a non-head daughter whose syntactic and semantic properties (synsem value) are compatible with the last element of the comps list of the head daughter ( 2 ). (14) provides the corresponding feature description for the example in (6a).

$$\text{(14)}\quad
\begin{bmatrix}
\textit{head-complement-phrase}\\
\textsc{phon}\ \langle\, \textit{Peter schläft} \,\rangle\\
\textsc{synsem|loc|cat|comps}\ \langle\rangle\\
\textsc{head-dtr} \begin{bmatrix}
\textsc{phon}\ \langle\, \textit{schläft} \,\rangle\\
\textsc{synsem|loc|cat|comps}\ \left\langle\, \boxed{1}\ \text{NP}[\textit{nom}] \,\right\rangle
\end{bmatrix}\\
\textsc{non-head-dtrs}\ \left\langle \begin{bmatrix}
\textsc{phon}\ \langle\, \textit{Peter} \,\rangle\\
\textsc{synsem}\ \boxed{1}
\end{bmatrix} \right\rangle
\end{bmatrix}$$

NP[*nom*] is an abbreviation for a complex feature description. Schema 1 divides the comps list of the head daughter into a single-element list and what is left. Since *schläft* 'sleeps' only has one element in its comps list, what remains is the empty list. This remainder is also the comps value of the mother.
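A procedural rendering of Schema 1 may be helpful. The sketch below is my own illustration (the function name `head_complement` is invented): it splits the comps list of the head daughter into an initial sublist and a final one-element list, checks the non-head daughter against that last element, and passes the remainder up to the mother:

```
# Illustrative rendering of the binary Head-Complement Schema.
# APPEND (⊕) corresponds to list concatenation: comps == rest + [last].
def head_complement(head_dtr: dict, non_head_dtr: dict) -> dict:
    comps = head_dtr["comps"]
    assert comps, "head daughter needs an open complement slot"
    rest, last = comps[:-1], comps[-1]          # comps = rest ⊕ ⟨ last ⟩
    # A real grammar unifies LAST with the non-head daughter's SYNSEM;
    # here the (simplified) descriptions are merely compared.
    assert non_head_dtr["synsem"] == last, "argument does not match selection"
    return {
        "phon": non_head_dtr["phon"] + head_dtr["phon"],  # verb-final order
        "comps": rest,                                    # passed up to the mother
    }

schlaeft = {"phon": ["schläft"], "comps": ["NP[nom]"]}
peter    = {"phon": ["Peter"], "synsem": "NP[nom]"}
print(head_complement(schlaeft, peter))
# {'phon': ['Peter', 'schläft'], 'comps': []}
```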

### **9.1.3 Linearization rules**

Dominance schemata do not say anything about the order of the daughters. As in GPSG, linearization rules are specified separately. Linearization rules can make reference to the properties of daughters, their function in a schema (head, complement, adjunct, …) or both. If we assume a feature initial for all heads, then heads which precede their complements would have the initial value '+' and heads following their complements would have the value '–'. The linearization rules in (15) ensure that ungrammatical orders such as (16b,d) are ruled out.<sup>8</sup>

	- (15) a. Head[initial+] < Complement
	- b. Complement < Head[initial−]

<sup>8</sup>Noun phrases pose a problem for (15): determiners have been treated as arguments until now and were included in the comps list of the head noun. Determiners occur to the left of the noun, whereas all other arguments of the noun are to the right. This problem can be solved either by refining the linearization rules (Müller 1999b: 164–165) or by introducing a special valence feature for determiners (Pollard & Sag 1994: Section 9.4). For an approach using such a feature, see Section 9.1.1.

Prepositions have an initial value '+' and therefore have to precede arguments. Verbs in final position bear the value '−' and have to follow their arguments.

	- b. * [[den Schrank] in]
	       the cupboard in
	- c. dass [er [ihn umfüllt]]
	     that he it decants
	     'that he decants it'
	- d. * dass [er [umfüllt ihn]]
	       that he decants it
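The effect of the initial feature in (15) and (16) can be pictured as a simple ordering function. The following sketch is illustrative only (`linearize` is an invented helper); in HPSG, linearization rules are declarative constraints stated separately from the dominance schemata:

```
# Sketch: linearization via the INITIAL feature.
# initial = True:  head precedes its complement (prepositions)
# initial = False: head follows its complement (clause-final verbs)
def linearize(head: dict, comp: dict) -> list:
    if head["initial"]:
        return head["phon"] + comp["phon"]
    return comp["phon"] + head["phon"]

in_p     = {"phon": ["in"], "initial": True}
umfuellt = {"phon": ["umfüllt"], "initial": False}
print(linearize(in_p, {"phon": ["den", "Schrank"]}))  # ['in', 'den', 'Schrank']
print(linearize(umfuellt, {"phon": ["ihn"]}))         # ['ihn', 'umfüllt']
```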

### **9.1.4 Projection of head properties**

As was explained in Section 1.5, certain properties of heads are important for the distribution of the whole phrase. For instance, the verb form belongs to the features that are important for the distribution of verbal projections. Certain verbs require a verbal argument with a particular form:

	- b. [Dem Mann geholfen] hat er nicht.
	     the man helped has he not
	     'He hasn't helped the man.'
	- c. * [Dem Mann geholfen] will er nicht.
	       the man helped wants he not
	- d. * [Dem Mann helfen] hat er nicht.
	       the man help has he not

*wollen* 'to want' always requires an infinitive without *zu* 'to', while *haben* 'have' on the other hand requires a verb in participle form. *glauben* 'believe' can occur with a finite clause, but not with an infinitive without *zu*:

(18) a. Ich glaube, Peter kommt morgen.
        I believe Peter comes tomorrow
        'I think Peter is coming tomorrow.'

     b. * Ich glaube, morgen kommen.
          I believe tomorrow come

This shows that projections of verbs must not only contain information about the part of speech but also information about the verb form. Figure 9.5 on the next page shows this on the basis of the finite verb *gibt* 'gives'.

Figure 9.5: Projection of the head features of the verb

GPSG has the Head Feature Convention that ensures that head features on the mother node are identical to those on the node of the head daughter. In HPSG, there is a similar principle. Unlike GPSG, head features are explicitly contained as a group of features in the feature structures. They are listed under the path synsem|loc|cat|head. (19) shows the lexical item for *gibt* 'gives':

(19) *gibt* 'gives':

$$\begin{bmatrix}
\textit{word}\\
\textsc{phon}\ \langle\, \textit{gibt} \,\rangle\\
\textsc{synsem|loc|cat} \begin{bmatrix}
\textsc{head} \begin{bmatrix} \textit{verb}\\ \textsc{vform}\ \textit{fin} \end{bmatrix}\\
\textsc{comps}\ \langle\, \text{NP}[\textit{nom}],\ \text{NP}[\textit{dat}],\ \text{NP}[\textit{acc}] \,\rangle
\end{bmatrix}
\end{bmatrix}$$

The *Head Feature Principle* takes the following form:

### **Principle 1 (Head Feature Principle)**

The head value of any headed phrase is structure-shared with the head value of the head daughter.

Figure 9.6 on the facing page is a variant of Figure 9.5 with the structure sharing made explicit.

The following section will deal with how this principle is formalized as well as how it can be integrated into the architecture of HPSG.

Figure 9.6: Projection of head features of a verb with structure sharing

### **9.1.5 Inheritance hierarchies and generalizations**

Up to now, we have seen one example of a dominance schema and more will follow in the coming sections, e.g., schemata for head-adjunct structures as well as for the binding off of long-distance dependencies. The Head Feature Principle is a general principle which must be met by all structures licensed by these schemata. As mentioned above, it must be met by all structures with a head. Formally, this can be captured by categorizing syntactic structures into those with and those without heads and assigning the type *headed-phrase* to those with a head. The type *head-complement-phrase* – the type which the description in Schema 1 on page 277 has – is a subtype of *headed-phrase*. Objects of a certain type x always have all properties that objects have that are supertypes of x. Recall the example from Section 6.2: an object of the type *woman* has all the properties of the type *person*. Furthermore, objects of type *woman* have additional, more specific properties not shared by other subtypes of *person*.

If one formulates a restriction on a supertype, this automatically affects all of its subtypes. The Head Feature Principle can hence be formalized as follows:

$$\text{(20)}\quad \textit{headed-phrase} \Rightarrow
\begin{bmatrix}
\textsc{synsem|loc|cat|head}\ \boxed{1}\\
\textsc{head-dtr|synsem|loc|cat|head}\ \boxed{1}
\end{bmatrix}$$

Figure 9.7: Type hierarchy for *sign*: all subtypes of *headed-phrase* inherit constraints

The arrow corresponds to a logical implication, as mentioned above. Therefore, (20) can be read as follows: if a structure is of type *headed-phrase*, then it must hold that the value of synsem|loc|cat|head is identical to the value of head-dtr|synsem|loc|cat|head.
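Structure sharing – the tag 1 in (20) – can be emulated in Python by letting two paths point to the very same object rather than to equal copies. This is just an analogy, not an implementation of the formalism:

```
# Structure sharing as token identity: the HEAD value of the phrase *is*
# the HEAD value of its head daughter, not an equal copy of it.
head_value = {"pos": "verb", "vform": "fin"}              # the tag 1 in (20)

gibt   = {"phon": ["gibt"], "head": head_value}
phrase = {"phon": ["dem", "Mann", "gibt"], "head": gibt["head"]}

print(phrase["head"] is gibt["head"])  # True: one object, two paths to it
```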

An extract of the type hierarchy under *sign* is given in Figure 9.7. *word* and *phrase* are subclasses of linguistic signs. Phrases can be divided into phrases with heads (*headed-phrase*) and those without (*non-headed-phrase*). There are also subtypes for phrases of type *non-headed-phrase* and *headed-phrase*. We have already discussed *head-complement-phrase*, and other subtypes of *headed-phrase* will be discussed in the later sections. As well as *word* and *phrase*, there are the types *root* and *stem*, which play an important role for the structure of the lexicon and the morphological component. Due to space considerations, it is not possible to further discuss these types here, but see Chapter 23.
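The inheritance of constraints from supertypes can be mimicked with a class hierarchy. The following sketch is only an analogy – type constraints in HPSG are declarative statements, not imperative code – but it shows how a constraint stated once on `HeadedPhrase` automatically holds for all of its subtypes:

```
# Sketch: constraint inheritance along the type hierarchy in Figure 9.7.
class Sign: ...
class Word(Sign): ...
class Phrase(Sign): ...

class HeadedPhrase(Phrase):
    def __init__(self, head_dtr, non_head_dtrs):
        self.head_dtr = head_dtr
        self.non_head_dtrs = non_head_dtrs
        self.head = head_dtr.head       # Head Feature Principle, stated once

class HeadComplementPhrase(HeadedPhrase):
    pass                                # inherits the constraint automatically

class FiniteVerb(Word):
    def __init__(self):
        self.head = {"pos": "verb", "vform": "fin"}

vp = HeadComplementPhrase(FiniteVerb(), [])
print(vp.head)   # projected from the head daughter via the inherited constraint
```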

The description in (21) shows the Head-Complement Schema from page 277 together with the restrictions that the type *head-complement-phrase* inherits from *headed-phrase*.

(21) Head-Complement Schema + Head Feature Principle:

$$\begin{bmatrix}
\textit{head-complement-phrase}\\[3pt]
\textsc{synsem|loc|cat} \begin{bmatrix}
\textsc{head}\ \boxed{1}\\
\textsc{comps}\ \boxed{2}
\end{bmatrix}\\[3pt]
\textsc{head-dtr|synsem|loc|cat} \begin{bmatrix}
\textsc{head}\ \boxed{1}\\
\textsc{comps}\ \boxed{2} \oplus \left\langle\, \boxed{3} \,\right\rangle
\end{bmatrix}\\[3pt]
\textsc{non-head-dtrs}\ \left\langle \begin{bmatrix} \textsc{synsem}\ \boxed{3} \end{bmatrix} \right\rangle
\end{bmatrix}$$

(22) gives a description of a structure licensed by Schema 1. As well as valence information, the head information is specified in (22) and it is also apparent how the Head Feature Principle ensures the projection of features: the head value of the entire structure ( 1 ) corresponds to the head value of the verb *gibt* 'gives'.

 

$$\text{(22)}\quad
\begin{bmatrix}
\textit{head-complement-phrase}\\[3pt]
\textsc{phon}\ \langle\, \textit{das Buch gibt} \,\rangle\\
\textsc{synsem|loc|cat} \begin{bmatrix}
\textsc{head}\ \boxed{1}\\
\textsc{comps}\ \boxed{2}\ \langle\, \text{NP}[\textit{nom}],\ \text{NP}[\textit{dat}] \,\rangle
\end{bmatrix}\\[3pt]
\textsc{head-dtr} \begin{bmatrix}
\textit{word}\\
\textsc{phon}\ \langle\, \textit{gibt} \,\rangle\\
\textsc{synsem|loc|cat} \begin{bmatrix}
\textsc{head}\ \boxed{1} \begin{bmatrix} \textit{verb}\\ \textsc{vform}\ \textit{fin} \end{bmatrix}\\
\textsc{comps}\ \boxed{2} \oplus \left\langle\, \boxed{3} \,\right\rangle
\end{bmatrix}
\end{bmatrix}\\[3pt]
\textsc{non-head-dtrs}\ \left\langle \begin{bmatrix}
\textsc{phon}\ \langle\, \textit{das Buch} \,\rangle\\
\textsc{synsem}\ \boxed{3} \begin{bmatrix}
\textsc{loc|cat} \begin{bmatrix}
\textsc{head} \begin{bmatrix} \textit{noun}\\ \textsc{cas}\ \textit{acc} \end{bmatrix}\\
\textsc{comps}\ \langle\rangle
\end{bmatrix}
\end{bmatrix}
\end{bmatrix} \right\rangle
\end{bmatrix}$$

For the entire clause *er das Buch dem Mann gibt* 'he gives the book to the man', we arrive at a structure (already shown in Figure 9.6) described by (23):

$$\text{(23)}\quad
\begin{bmatrix}
\textsc{phon}\ \langle\, \textit{er das Buch dem Mann gibt} \,\rangle\\
\textsc{synsem|loc|cat} \begin{bmatrix}
\textsc{head} \begin{bmatrix} \textit{verb}\\ \textsc{vform}\ \textit{fin} \end{bmatrix}\\
\textsc{comps}\ \langle\rangle
\end{bmatrix}
\end{bmatrix}$$

This description corresponds to the sentence symbol S in the phrase structure grammar on page 53; however, (23) additionally contains information about the form of the verb.

Using dominance schemata as an example, we have shown how generalizations about linguistic objects can be captured. However, we also want to be able to capture generalizations in other areas of the theory: like Categorial Grammar, the HPSG lexicon contains a very large amount of information. Lexical entries (roots and words) can also be divided into classes, which can then be assigned types. In this way, it is possible to capture what all verbs have in common, and likewise what all intransitive or all transitive verbs have in common. See Figure 23.1 on page 692.

Now that some fundamental concepts of HPSG have been introduced, the following section will show how the semantic contribution of words is represented and how the meaning of a phrase can be determined compositionally.

### **9.1.6 Semantics**

An important difference between theories such as GB, LFG and TAG, on the one hand, and HPSG and CxG, on the other, is that the semantic content of a linguistic object is modeled in a feature structure just like all its other properties. As previously mentioned, semantic information is found under the path synsem|loc|cont. (24) gives an example of the cont value for *Buch* 'book'. The representation is based on Minimal Recursion Semantics (MRS):<sup>9</sup>

$$\text{(24)}\quad
\begin{bmatrix}
\textit{mrs}\\
\textsc{ind}\ \boxed{1} \begin{bmatrix}
\textsc{per}\ \textit{3}\\
\textsc{num}\ \textit{sg}\\
\textsc{gen}\ \textit{neu}
\end{bmatrix}\\
\textsc{rels}\ \left\langle \begin{bmatrix}
\textit{buch}\\
\textsc{inst}\ \boxed{1}
\end{bmatrix} \right\rangle
\end{bmatrix}$$

ind stands for index and rels is a list of relations. Features such as person, number and gender are part of a nominal index. These are important in determining reference or coreference. For example, *sie* 'she' in (25) can refer to *Frau* 'woman' but not to *Buch* 'book'. On the other hand, *es* 'it' cannot refer to *Frau* 'woman'.

(25) Die Frau kauft ein Buch. Sie liest es.
     the woman buys a book she reads it
     'The woman buys a book. She reads it.'

In general, pronouns have to agree in person, number and gender with the element they refer to. Indices are then identified accordingly. In HPSG, this is done by means of structure sharing. It is also common to speak of *coindexation*. (26) provides some examples of coindexation of reflexive pronouns:

	- b. Du siehst dich.
	     you see yourself
	- c. Er sieht sich.
	     he sees himself
	- d. Wir sehen uns.
	     we see ourselves
	- e. Ihr seht euch.
	     you see yourselves

<sup>9</sup> Pollard & Sag (1994) and Ginzburg & Sag (2000) make use of Situation Semantics (Barwise & Perry 1983, Cooper, Mukai & Perry 1990, Devlin 1992). An alternative approach which has already been used in HPSG is Lexical Resource Semantics (Richter & Sailer 2004). For an early underspecification analysis in HPSG, see Nerbonne (1993).

f. Sie sehen sich.
   they see themselves

The question of which instances of coindexation are possible and which are necessary is determined by Binding Theory. Pollard & Sag (1992, 1994) have shown that Binding Theory in HPSG does not have many of the problems that arise when implementing binding in GB with reference to tree configurations. There are, however, a number of open questions for Binding Theory in HPSG (Müller 1999b: Section 20.4).

(27) shows the cont value for the verb *geben* 'give':


It is assumed that verbs have an event variable of the type *event*, which is represented under ind just like the indices of nominal objects. Up to now, the elements in the valence list were not assigned to argument roles in the semantic representation. This connection is referred to as *linking*. (28) shows how linking works in HPSG. The referential indices of the argument noun phrases are structure-shared with one of the semantic roles of the relation contributed by the head.

(28) Lexical entry for the lexeme *geben* 'give':

```
cont    mrs
        ind 4 event
        rels ⟨ [ geben
                 event 4
                 agent 1
                 goal  2
                 theme 3 ] ⟩

arg-st ⟨ NP[nom] 1 , NP[dat] 2 , NP[acc] 3 ⟩
```
The list that contains all the arguments of a head is called the *argument structure list*, and it is represented as the value of the arg-st feature. This list plays a very important role in HPSG grammars: case is assigned there, the Binding Theory operates on arg-st, and the linking between syntax and semantics takes place on arg-st as well.

For finite verbs, the value of arg-st is identical to the value of comps in German. As was explained in Section 9.1.1, the first element of the arg-st list is the subject in languages like English, and it is represented in the spr list (Sag, Wasow & Bender 2003: Section 4.3, 7.3.1; Müller 2023b). All other elements from arg-st are contained in the comps list. So there are language-specific ways to represent valence, but there is one common representation that is the same for all argument structure representations. This makes it possible to capture cross-linguistic generalizations.
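The language-specific distribution of the arg-st list over the valence features can be pictured as follows; the function `map_arg_st` is an invented illustration, not part of any grammar implementation:

```
# Sketch: distributing ARG-ST over the valence features.
# English (SVO): first argument goes to SPR, the rest to COMPS.
# German (finite verbs): all arguments stay in COMPS.
def map_arg_st(arg_st: list, language: str) -> dict:
    if language == "English":
        return {"spr": arg_st[:1], "comps": arg_st[1:], "arg-st": arg_st}
    if language == "German":
        return {"spr": [], "comps": list(arg_st), "arg-st": arg_st}
    raise ValueError(language)

print(map_arg_st(["NP[nom]", "NP[acc]"], "English"))
print(map_arg_st(["NP[nom]", "NP[acc]"], "German"))
```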

Since we use general terms such as agent and patient for argument roles, it is possible to state generalizations about valence classes and the realization of argument roles. For example, one can divide verbs into verbs taking an agent, verbs with an agent and theme, verbs with agent and patient etc. These various valence/linking patterns can be represented in type hierarchies and these classes can be assigned to the specific lexical entries, that is, one can have them inherit constraints from the respective types. A type constraint for verbs with agent, theme and goal takes the form of (29):

$$\text{(29)}\quad
\begin{bmatrix}
\textsc{cont} \begin{bmatrix}
\textit{mrs}\\
\textsc{ind}\ \boxed{4}\ \textit{event}\\
\textsc{rels}\ \left\langle \begin{bmatrix}
\textit{agent-goal-theme-rel}\\
\textsc{event}\ \boxed{4}\\
\textsc{agent}\ \boxed{1}\\
\textsc{goal}\ \boxed{2}\\
\textsc{theme}\ \boxed{3}
\end{bmatrix} \right\rangle
\end{bmatrix}\\[3pt]
\textsc{arg-st}\ \left\langle\, []_{\boxed{1}},\ []_{\boxed{2}},\ []_{\boxed{3}} \,\right\rangle
\end{bmatrix}$$

[] 1 stands for an object of unspecified syntactic category with the index 1 . The type for the relation *geben*′ is a subtype of *agent-goal-theme-rel*. The lexical entry for the word *geben* 'give' or rather the root *geb*- has the linking pattern in (29). For more on theories of linking in HPSG, see Davis (1996), Wechsler (1995) and Davis & Koenig (2000). Davis et al. (2021) provide an overview of approaches to linking within HPSG.

Up to now, we have only seen how the meaning of lexical entries can be represented. The Semantics Principle determines the computation of the semantic contribution of phrases: the index of the entire expression corresponds to the index of the head daughter, and the rels value of the entire sign corresponds to the concatenation of the rels values of the daughters plus any relations introduced by the dominance schema. The last point is important because the assumption that schemata can add something to meaning can capture the fact that there are some cases where the entire meaning of a phrase is more than simply the sum of its parts. Pertinent examples are often discussed as part of Construction Grammar. Semantic composition in HPSG is organized such that meaning components that are due to certain patterns can be integrated into the complete meaning of an utterance. For examples, see Section 21.10.
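A rough procedural rendering of the Semantics Principle (with invented names; the actual principle is a declarative constraint) looks as follows:

```
# Sketch of the Semantics Principle: the INDEX comes from the head daughter;
# RELS concatenates the daughters' RELS plus relations added by the schema.
def phrase_semantics(head_dtr: dict, non_head_dtrs: list, schema_rels=()) -> dict:
    rels = list(head_dtr["rels"])
    for dtr in non_head_dtrs:
        rels += dtr["rels"]
    rels += list(schema_rels)          # constructional meaning, if any
    return {"ind": head_dtr["ind"], "rels": rels}

gibt = {"ind": "e1", "rels": [("geben", "e1", "x1", "x2", "x3")]}
buch = {"ind": "x3", "rels": [("buch", "x3")]}
print(phrase_semantics(gibt, [buch]))
# {'ind': 'e1', 'rels': [('geben', ...), ('buch', 'x3')]}
```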

The connection between the semantic contribution of the verb and its arguments is established in the lexical entry. As such, we ensure that the argument roles of the verb are assigned to the correct argument in the sentence. This is, however, not the only thing that the semantics is responsible for. It has to be able to generate the various readings associated with quantifier scope ambiguities (see page 90) as well as deal with semantic embedding of predicates under other predicates. All these requirements are fulfilled by MRS. Due to space considerations, we cannot go into detail here. The reader is referred to the article by Copestake, Flickinger, Pollard & Sag (2005) and to Section 19.3 in the discussion chapter.

### **9.1.7 Adjuncts**

Analogous to the selection of arguments by heads via comps, adjuncts can also select their heads using a feature (mod, for modified). Adjectives, prepositional phrases that modify nouns, and relative clauses select an almost complete nominal projection, that is, a noun that only needs to be combined with a determiner to yield a complete NP. (30) shows a description of the respective *synsem* object. The symbol N̄, which is familiar from X theory (see Section 2.5), is used as an abbreviation for this feature description.

(30) AVM that is abbreviated as N̄:

$$\begin{bmatrix}
\textsc{cat} \begin{bmatrix}
\textsc{head}\ \textit{noun}\\
\textsc{spr}\ \langle\, \text{Det} \,\rangle\\
\textsc{comps}\ \langle\rangle
\end{bmatrix}
\end{bmatrix}$$

(31) shows part of the lexical item for *interessantes* 'interesting':<sup>10</sup>

(31) cat value for *interessantes* 'interesting':

$$\begin{bmatrix}
\textsc{head} \begin{bmatrix}
\textit{adj}\\
\textsc{mod}\ \overline{\text{N}}
\end{bmatrix}\\
\textsc{comps}\ \langle\rangle
\end{bmatrix}$$
*interessantes* is an adjective that does not take any arguments and therefore has an empty comps list. Adjectives such as *treu* 'loyal' have a dative NP in their comps list.

(32) ein dem König treues Mädchen
     a the.dat king loyal girl
     'a girl loyal to the king'

The cat value is given in (33):

(33) cat value for *treues* 'loyal':

$$\begin{bmatrix}
\textsc{head} \begin{bmatrix}
\textit{adj}\\
\textsc{mod}\ \overline{\text{N}}
\end{bmatrix}\\
\textsc{comps}\ \langle\, \text{NP}[\textit{dat}] \,\rangle
\end{bmatrix}$$

*dem König treues* 'loyal to the king' forms an adjective phrase, which modifies *Mädchen*.

Unlike the selectional feature comps, which belongs to the features under cat, mod is a head feature. The reason for this is that the feature that selects the head to be modified has to be present on the maximal projection of the adjunct. The N̄-modifying property of the adjective phrase *dem König treues* 'loyal to the king' has to be included in the representation of the entire AP, just as it is present at the lexical level in the entry for adjectives in (31). The adjectival phrase *dem König treues* has the same syntactic properties as the basic adjective *interessantes* 'interesting':

<sup>10</sup>In what follows, I am also omitting the spr feature, whose value would be the empty list.

(34) cat value for *dem König treues* 'loyal to the king':

$$\begin{bmatrix}
\textsc{head} \begin{bmatrix}
\textit{adj}\\
\textsc{mod}\ \overline{\text{N}}
\end{bmatrix}\\
\textsc{comps}\ \langle\rangle
\end{bmatrix}$$

Since mod is a head feature, the Head Feature Principle (see page 280) ensures that the mod value of the entire projection is identical to the mod value of the lexical entry for *treues* 'loyal'.

As an alternative to the selection of the head by the modifier, one could assume a description of all possible adjuncts on the head itself. This was suggested by Pollard & Sag (1987: 161). Pollard & Sag (1994: Section 1.9) revised the earlier analysis since the semantics of modification could not be captured.<sup>11</sup>

Figure 9.8 demonstrates selection in head-adjunct structures.

Figure 9.8: Head-adjunct structure (selection)

Head-adjunct structures are licensed by Schema 2.

### **Schema 2 (Head-Adjunct Schema)**

```
head-adjunct-phrase ⇒

 head-dtr|synsem 1
 non-head-dtrs ⟨ [ synsem|loc|cat [ head|mod 1
                                    comps ⟨ ⟩ ] ] ⟩
```
The value of the selectional feature on the adjunct ( 1 ) is identified with the synsem value of the head daughter, thereby ensuring that the head daughter has the properties specified by the adjunct. The comps value of the non-head daughter is the empty list, which is why only completely saturated adjuncts are allowed in head-adjunct structures. Phrases such as (35b) are therefore correctly ruled out:

<sup>11</sup>See Bouma, Malouf & Sag (2001), however. The authors pursue a hybrid analysis where there are adjuncts which select heads and also adjuncts that are selected by a head. Minimal Recursion Semantics is the semantic theory underlying this analysis. Using this semantics, the problems arising for Pollard & Sag (1987) with regard to the semantics of modifiers are avoided.

(35) a. die Wurst in der Speisekammer
        the sausage in the pantry

     b. * die Wurst in
          the sausage in

Example (35a) requires some further explanation. The preposition *in* (as used in (35a)) has the following cat value:

(36) cat value of *in*:

$$\begin{bmatrix}
\textsc{head} \begin{bmatrix}
\textit{prep}\\
\textsc{mod}\ \overline{\text{N}}
\end{bmatrix}\\
\textsc{comps}\ \langle\, \text{NP}[\textit{dat}] \,\rangle
\end{bmatrix}$$

After combining *in* with the nominal phrase *der Speisekammer* 'the pantry' one gets:

(37) cat value for *in der Speisekammer* 'in the pantry':

 head " *prep* mod N # comps ⟨ ⟩ 

This representation corresponds to that of the adjective *interessantes* 'interesting' and – ignoring the position of the PP – can be used in the same way: the PP modifies an N̄.

Heads that can only be used as arguments but do not modify anything have a mod value of *none*. They can therefore not occur in the position of the non-head daughter in head-adjunct structures since the mod value of the non-head daughter has to be compatible with the synsem value of the head daughter.
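To sum up the mechanics of this section, the licensing conditions on head-adjunct structures can be rendered as a small sketch (the helper `head_adjunct` is invented; linear order is ignored here, since it is handled by the linearization rules of Section 9.1.3):

```
# Sketch of head-adjunct combination: the adjunct's MOD value must match the
# head daughter's SYNSEM, and the adjunct must be fully saturated itself.
def head_adjunct(head_dtr: dict, adjunct: dict) -> dict:
    assert adjunct["comps"] == [], "only saturated adjuncts may modify"
    assert adjunct["mod"] == head_dtr["synsem"], "adjunct does not fit this head"
    return {"phon": adjunct["phon"] + head_dtr["phon"],
            "synsem": head_dtr["synsem"],      # head properties project
            "comps": []}

buch = {"phon": ["Buch"], "synsem": "N-bar"}
adj  = {"phon": ["interessantes"], "mod": "N-bar", "comps": []}
print(head_adjunct(buch, adj)["phon"])   # ['interessantes', 'Buch']
```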

## **9.2 Passive**

HPSG follows Bresnan's argumentation (see Section 7.2) and takes care of the passive in the lexicon.<sup>12</sup> A lexical rule takes the verb stem as its input and licenses the participle form; the most prominent argument (the so-called designated argument) is suppressed.<sup>13</sup> Since grammatical functions are not part of the theory in HPSG, we do not require any mapping principles that map objects to subjects.

<sup>12</sup>Some exceptions to this are analyses influenced by Construction Grammar such as Tseng (2007) and Haugereid (2007). These approaches are problematic, however, as they cannot account for Bresnan's adjectival passives. For other problems with Haugereid's analysis, see Müller (2007b) and Section 21.3.6.

<sup>13</sup>For more on the designated argument, see Haider (1986a). HPSG analyses of the passive in German have been considerably influenced by Haider. Haider uses the designated argument to model the difference between so-called unaccusative and unergative verbs (Perlmutter 1978): unaccusative verbs differ from unergatives and transitives in that they do not have a designated argument. We cannot go into the literature on unaccusativity here. The reader is referred to the original works by Haider and the chapter on the passive in Müller (2007a).

Nevertheless, one still has to explain the change of case under passivization. The following two subsections introduce passive lexical rules and show how the passive can be accounted for by explicitly mapping the accusative to the nominative (Section 9.2.1), and how this analysis can be improved so that accusatives do not have to be mentioned and the analysis accounts for impersonal passives as well (Section 9.2.2).

### **9.2.1 Passive as a lexical rule**

If one fully specifies the case of a particular argument in the lexical entries, one has to ensure that the accusative argument of a transitive verb is realized as nominative in the passive. (38) shows what the respective lexical rule would look like:

(38) Lexical rule for personal passives adapted from Kiss (1992):

$$\begin{bmatrix}
\textit{stem}\\
\textsc{phon}\ \boxed{1}\\
\textsc{synsem|loc|cat|head}\ \textit{verb}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}],\ \text{NP}[\textit{acc}]_{\boxed{2}} \,\right\rangle \oplus \boxed{3}
\end{bmatrix} \mapsto
\begin{bmatrix}
\textit{word}\\
\textsc{phon}\ f(\boxed{1})\\
\textsc{synsem|loc|cat|head|vform}\ \textit{passive-part}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}]_{\boxed{2}} \,\right\rangle \oplus \boxed{3}
\end{bmatrix}$$

This lexical rule takes a verb stem<sup>14</sup> as its input that requires a nominative argument, an accusative argument and possibly further arguments (if 3 is not the empty list), and it licenses a lexical entry that requires a nominative argument and possibly the arguments in 3.<sup>15</sup> The output of the lexical rule specifies the vform value of the output word. This is important as the auxiliary and the main verb must go together. For example, it is not possible to use the perfect participle instead of the passive participle, since these differ in their valence in Kiss' approach:

<sup>14</sup>The term *stem* includes roots (*helf* - 'help-'), products of derivation (*besing*- 'to sing about') and compounds. The lexical rule can therefore also be applied to stems like *helf* - and derived forms such as *besing*-.

<sup>15</sup>This rule assumes that arguments of ditransitive verbs are in the order nominative–accusative–dative. Throughout this chapter, I assume a nominative, dative, accusative order, which corresponds to the unmarked order of arguments in the German clause. Kiss (2001) argued that a representation of the unmarked order is needed to account for scope facts in German. Furthermore, the order of the arguments corresponds to the order one would assume for English, which has the advantage that cross-linguistic generalizations can be captured. In earlier work I assumed that the order is nominative–accusative–dative since this order encodes a prominence hierarchy that is relevant in a lot of areas in German grammar. Examples are: ellipsis (Klein 1985), Topic Drop (Fries 1988), free relatives (Bausewein 1991, Pittner 1995, Müller 1999a), depictive secondary predicates (Müller 2004b, 2002a, 2008), Binding Theory (Grewendorf 1985; Pollard & Sag: 1992; 1994: Chapter 6). This order also corresponds to the Obliqueness Hierarchy suggested by Keenan & Comrie (1977) and Pullum (1977). In order to capture this hierarchy, a special list with nominative–accusative–dative order would have to be assumed.

The version of the passive lexical rule that will be suggested below is compatible with both orders of arguments.

	- b. * Der Mann wird den Weltmeister geschlagen.
	       the man aux the world.champion beaten
	- c. Der Weltmeister wird geschlagen.
	     the world.champion aux beaten
	     'The world champion is (being) beaten.'

There are a few conventions for the interpretation of lexical rules: all information that is not mentioned in the output sign is taken over from the input sign. Thus, the meaning of the verb is not mentioned in the passive rule, which makes sense as the passive rule is a meaning-preserving rule. The cont values of the input and the output are not mentioned in the rule and hence are identical. It is important here that the linking information is retained. As an example, consider the application of the rule to the verb stem *schlag*- 'beat':

(40) a. Input *schlag*- 'beat':

$$\begin{bmatrix}
\textsc{phon}\ \langle\, \textit{schlag} \,\rangle\\
\textsc{synsem|loc} \begin{bmatrix}
\textsc{cat} \begin{bmatrix}
\textsc{head}\ \textit{verb}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}]_{\boxed{1}},\ \text{NP}[\textit{acc}]_{\boxed{2}} \,\right\rangle
\end{bmatrix}\\
\textsc{cont} \begin{bmatrix}
\textsc{ind}\ \boxed{3}\ \textit{event}\\
\textsc{rels}\ \left\langle \begin{bmatrix}
\textit{schlagen}\\
\textsc{event}\ \boxed{3}\\
\textsc{agent}\ \boxed{1}\\
\textsc{patient}\ \boxed{2}
\end{bmatrix} \right\rangle
\end{bmatrix}
\end{bmatrix}
\end{bmatrix}$$

b. Output *geschlagen* 'beaten':

$$\begin{bmatrix}
\textsc{phon}\ \langle\, \textit{geschlagen} \,\rangle\\
\textsc{synsem|loc} \begin{bmatrix}
\textsc{cat} \begin{bmatrix}
\textsc{head} \begin{bmatrix} \textit{verb}\\ \textsc{vform}\ \textit{passive-part} \end{bmatrix}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}]_{\boxed{2}} \,\right\rangle
\end{bmatrix}\\
\textsc{cont} \begin{bmatrix}
\textsc{ind}\ \boxed{3}\ \textit{event}\\
\textsc{rels}\ \left\langle \begin{bmatrix}
\textit{schlagen}\\
\textsc{event}\ \boxed{3}\\
\textsc{agent}\ \boxed{1}\\
\textsc{patient}\ \boxed{2}
\end{bmatrix} \right\rangle
\end{bmatrix}
\end{bmatrix}
\end{bmatrix}$$

The agent role is connected to the subject of *schlag*-. After passivization, the subject is suppressed and the argument connected to the patient role of *schlag*- becomes the subject of the participle. Argument linking is not affected by this and thus the nominative argument is correctly assigned to the patient role.
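As a procedural illustration of these conventions, the following sketch (my own simplification of (38)) deletes the nominative argument, turns the accusative argument into a nominative while keeping its index, and copies everything that the rule does not mention – in particular the cont value – unchanged:

```
# Sketch of the rule in (38): the accusative argument keeps its index (tag 2)
# but becomes NP[nom]; unmentioned values (notably CONT) carry over unchanged.
def personal_passive(stem: dict) -> dict:
    nom, acc, *rest = stem["arg-st"]
    assert nom[0] == "NP[nom]" and acc[0] == "NP[acc]"
    word = dict(stem)                          # copy: unmentioned values persist
    word["vform"] = "passive-part"
    word["arg-st"] = [("NP[nom]", acc[1])] + rest
    # PHON would be mapped by the function f in (38); morphology is not modeled.
    return word

schlag = {
    "phon": "schlag-",
    "arg-st": [("NP[nom]", 1), ("NP[acc]", 2)],   # 1 = agent, 2 = patient
    "cont": {"rel": "schlagen", "agent": 1, "patient": 2},
}
print(personal_passive(schlag)["arg-st"])  # [('NP[nom]', 2)]: patient is now subject
```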

As Meurers (2001) has shown, lexical rules can also be captured with feature descriptions. (41) shows the feature description representation of (38).

$$\text{(41)}\quad
\begin{bmatrix}
\textit{acc-passive-lexical-rule}\\
\textsc{phon}\ f(\boxed{1})\\
\textsc{synsem|loc|cat|head|vform}\ \textit{passive-part}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}]_{\boxed{2}} \,\right\rangle \oplus \boxed{3}\\[3pt]
\textsc{lex-dtr} \begin{bmatrix}
\textit{stem}\\
\textsc{phon}\ \boxed{1}\\
\textsc{synsem|loc|cat|head}\ \textit{verb}\\
\textsc{arg-st}\ \left\langle\, \text{NP}[\textit{nom}],\ \text{NP}[\textit{acc}]_{\boxed{2}} \,\right\rangle \oplus \boxed{3}
\end{bmatrix}
\end{bmatrix}$$

What is on the left-hand side of the rule in (38) is contained in the value of lex-dtr in (41). Since this kind of lexical rule is fully integrated into the formalism, feature structures corresponding to these lexical rules also have their own type. If the result of the application of a given rule is an inflected word, then the type of the lexical rule (*acc-passive-lexical-rule* in our example) is a subtype of *word*. Since lexical rules have a type, it is possible to state generalizations over lexical rules.

The lexical rules discussed thus far work well for the personal passive. For the impersonal passive, however, we would require a second lexical rule. Furthermore, we would have two different lexical items for the passive and the perfect, although the forms are always identical in German. In the following, I will discuss the basic assumptions that are needed for a theory of the passive that can sufficiently explain both personal and impersonal passives and thereby only require one lexical item for the participle form.

### **9.2.2 Valence information and the Case Principle**

In Section 3.4.1, the difference between structural and lexical case was motivated. In the HPSG literature, it is assumed following Haider (1986a) that the dative is a lexical case. For arguments marked with a lexical case, their case value is directly specified in the description of the argument. Arguments with structural case are also specified in the lexicon as taking structural case, but the actual case value is not provided. In order for the grammar not to make any false predictions, it has to be ensured that the structural cases receive a unique value dependent on their environment. This is handled by the Case Principle:<sup>16</sup>

<sup>16</sup>The Case Principle has been simplified here. Cases of so-called 'raising' require special treatment. For more details, see Meurers (1999c), Przepiórkowski (1999a) and Müller (2007a: Chapter 14, Chapter 17). The Case Principle given in these publications is very similar to the one proposed by Yip, Maling & Jackendoff (1987) and can therefore also explain the case systems of the languages discussed in their work, notably the complicated case system of Icelandic.

#### **Principle 2 (Case Principle (simplified))**

- In a list that contains the arguments of a verbal head, the least oblique element with structural case receives nominative.
- All other elements of this list with structural case receive accusative.

(42) shows prototypical valence lists for finite verbs:

(42) a. *schläft* 'sleeps': arg-st ⟨ NP[*str*] ⟩
b. *unterstützt* 'supports': arg-st ⟨ NP[*str*], NP[*str*] ⟩
c. *hilft* 'helps': arg-st ⟨ NP[*str*], NP[*ldat*] ⟩
d. *schenkt* 'gives': arg-st ⟨ NP[*str*], NP[*ldat*], NP[*str*] ⟩

*str* stands for *structural* and *ldat* for lexical dative. The Case Principle ensures that the subjects of the verbs listed above have to be realized in the nominative and also that objects with structural case are assigned accusative.
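
The effect of the Case Principle on the lists in (42) can be illustrated with a small procedural sketch. This is a toy Python approximation for expository purposes only, not part of the HPSG formalism; the function name and the string encoding of the arg-st lists are invented for this illustration:

```python
# Toy illustration of the simplified Case Principle (Principle 2).
# Arguments are encoded as 'str' (structural case, value still open)
# or as a lexically fixed case such as 'ldat'.

def resolve_case(arg_st):
    """First structural argument of a finite verb -> nominative;
    all further structural arguments -> accusative; lexically
    specified cases are left untouched."""
    resolved, got_nominative = [], False
    for case in arg_st:
        if case != 'str':
            resolved.append(case)          # lexical case, e.g., 'ldat'
        elif not got_nominative:
            resolved.append('nom')
            got_nominative = True
        else:
            resolved.append('acc')
    return resolved

# The prototypical valence lists from (42):
print(resolve_case(['str']))                 # schläft:     ['nom']
print(resolve_case(['str', 'str']))          # unterstützt: ['nom', 'acc']
print(resolve_case(['str', 'ldat']))         # hilft:       ['nom', 'ldat']
print(resolve_case(['str', 'ldat', 'str']))  # schenkt:     ['nom', 'ldat', 'acc']
```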

With the difference between structural and lexical case, it is possible to formulate a passive-lexical rule that can account for both the personal and the impersonal passive:

(43) Lexical rule for personal and impersonal passive (simplified):

stem
phon 1
synsem|loc|cat [ head *verb*, arg-st ⟨ NP[*str*] ⟩ ⊕ 2 ]
↦
phon f( 1 )
synsem|loc|cat [ head [ vform *ppp* ], arg-st 2 ]

This lexical rule does exactly what we expect it to do from a pretheoretical perspective on the passive: it suppresses the most prominent argument with structural case, that is, the argument that corresponds to the subject in the active clause.
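
Continuing the toy encoding from the sketch above, the suppression performed by (43) can be approximated as follows; this is again a hypothetical illustration with invented names, not the formalism itself:

```python
# Toy illustration of the passive lexical rule in (43): the most
# prominent argument with structural case is suppressed.

def passivize(arg_st):
    """Remove the first argument bearing structural case ('str')."""
    for i, case in enumerate(arg_st):
        if case == 'str':
            return arg_st[:i] + arg_st[i + 1:]
    return list(arg_st)  # nothing to suppress

# Personal passive (unterstützt): the remaining structural NP will be
# assigned nominative by the Case Principle.
print(passivize(['str', 'str']))   # ['str']
# Impersonal passive (hilft): no structural NP remains, so the
# lexically specified dative is unaffected.
print(passivize(['str', 'ldat']))  # ['ldat']
```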


The standard analysis of verb–auxiliary constructions in German assumes that the main verb and the auxiliary form a verbal complex (Hinrichs & Nakazawa 1994a, Pollard 1994, Müller 1999b, 2002a, Meurers 2000, Kathol 2000). The arguments of the embedded verb are taken over by the auxiliary. For the analysis of the passive this means that the auxiliary has an arg-st that starts with the elements shown in (44):

(44) a. *geschlafen* 'slept': arg-st ⟨⟩
b. *unterstützt* 'supported': arg-st ⟨ NP[*str*] ⟩
c. *geholfen* 'helped': arg-st ⟨ NP[*ldat*] ⟩
d. *geschenkt* 'given': arg-st ⟨ NP[*ldat*], NP[*str*] ⟩

(44) differs from (42) in that a different NP is in first position. If this NP has structural case, it will receive nominative case. If there is no NP with structural case, as in (44c), the case remains as it was, that is, lexically specified.

We cannot go into the analysis of the perfect here. It should be noted, however, that the same lexical item for the participle is used for (45).

	- a. Er hat den Weltmeister geschlagen.
he has the world.champion beaten
'He has beaten the world champion.'
	- b. Der Weltmeister wurde geschlagen.
the world.champion aux beaten
'The world champion was beaten.'

It is the auxiliary that determines which arguments are realized (Haider 1986a; Müller 2007a: Chapter 17). The lexical rule in (43) licenses a form that can be used both in the passive and in the perfect. Therefore, the vform value is *ppp*, which stands for *perfect passive participle*.

One should note that this analysis of the passive works without movement of constituents. The problems with the GB analysis do not arise here. Reordering of arguments (see Section 9.4) is independent of passivization. Unlike in GPSG, Categorial Grammar or Bresnan's LFG analysis from before the introduction of Lexical Mapping Theory (see page 234), the accusative object is not mentioned at all. The passive can be analyzed directly as the suppression of the subject. Everything else follows from interaction with other principles of grammar.

# **9.3 Verb position**

The analysis of verb position that I will present here is based on the GB analysis. There are a number of different approaches to verb position in HPSG, but in my opinion, the HPSG variant of the GB analysis is the only adequate one (Müller 2005b,c, 2023a). The analysis of (46) can be summarized as follows: in verb-initial clauses, there is a trace in verb-final position. There is a special form of the verb in initial position that selects a projection of the verb trace. This special lexical item is licensed by a lexical rule. The connection between the verb and the trace is treated like long-distance dependencies in GPSG via identification of information in the tree or feature structure (structure sharing).

(46) Kennt jeder diesen Roman \_?
knows everyone.nom this.acc novel
'Does everyone know this novel?'

Figure 9.9 on the next page gives an overview of this. The verb trace in final position behaves just like the verb both syntactically and semantically. The information about the missing word is represented as the value of the feature double slash (abbreviated: dsl). This is a head feature and is therefore passed up to the maximal projection (VP). The verb in initial position has a VP in its comps list which is missing a verb (VP//V). This is the same verb that was the input for the lexical rule and that would normally occur in final position.

Figure 9.9: Analysis of verb position in HPSG

In Figure 9.9, there are two maximal verb projections: *jeder diesen Roman \_* with the trace as the head and *kennt jeder diesen Roman \_* with *kennt* as the head.

This analysis will be explained in more detail in what follows. For the trace in Figure 9.9, one could assume the lexical entry in (47).

(47) Verb trace for *kennt* 'knows':

phon ⟨⟩
synsem|loc [ cat [ head [ *verb*, vform *fin* ]
                   comps ⟨ NP[*nom*] 2, NP[*acc*] 3 ⟩ ]
             cont [ ind 4
                    rels ⟨ [ *kennen*, event 4, experiencer 2, theme 3 ] ⟩ ] ]

This lexical entry differs from the normal verb *kennt* only in its phon value. The syntactic aspects of an analysis with this trace are represented in Figure 9.10 on the following page.

The combination of the trace with *diesen Roman* 'this novel' and *jeder* 'everybody' follows the rules and principles that we have encountered thus far. This raises the immediate question as to what licenses the verb *kennt* in Figure 9.10 and what status it has.

If we want to capture the fact that the finite verb in initial position behaves like a complementizer (Höhle 1997), then it makes sense to give head status to *kennt* in Figure 9.10 and have *kennt* select a saturated, verb-final verbal projection. Finite verbs in initial position differ from complementizers in that they require a projection of a verb trace, whereas complementizers need projections of overt verbs:

Figure 9.10: Analysis of *Kennt jeder diesen Roman?* 'Does everyone know this novel?'

(48) a. dass [jeder diesen Roman kennt]
that everybody.nom this.acc novel knows
'that everybody knows this novel'
b. Kennt [jeder diesen Roman \_]?
knows everybody.nom this.acc novel
'Does everybody know this novel?'

It is normally not the case that *kennen* 'know' selects a complete sentence and nothing else as would be necessary for the analysis of *kennt* as the head in (48b). Furthermore, we must ensure that the verbal projection with which *kennt* is combined contains the verb trace belonging to *kennt*. If it could contain a trace belonging to *gibt* 'gives', for example, we would be able to analyze sentences such as (49b):

(49) a. Gibt [der Mann der Frau das Buch \_]?
gives the man the woman the book
'Does the man give the woman the book?'
b. \* Kennt [der Mann der Frau das Buch \_]?
knows the man the woman the book

In the preceding discussion, the dependency between the fronted verb and the verb trace was expressed by coindexation. In HPSG, identity is always enforced by structure sharing. The verb in initial position must therefore require that the trace has exactly those properties of the verb that the verb would have had, were it in final position. The information that must be shared is therefore all locally relevant syntactic and semantic information, that is, all information under local. Since phon is not part of the local features, it is not shared and this is why the phon values of the trace and verb can differ. Up to now, one crucial detail has been missing in the analysis: the local value of the trace cannot be directly structure-shared with a requirement of the initial verb since the verb *kennt* can only select the properties of the projection of the trace and the comps list of the selected projection is the empty list. This leads us to the problem that was pointed
out in the discussion of (49b). It must therefore be ensured that all information about the verb trace is available on the highest node of its projection. This can be achieved by introducing a head feature whose value is identical to the local value of the trace. This feature is referred to as dsl. As was already mentioned above, dsl stands for *double slash*. It is called so because it has a similar function to the slash feature, which we will encounter in the following section.<sup>17</sup> (50) shows the modified entry for the verb trace:

(50) Verb trace of *kennt* (preliminary version):

phon ⟨⟩
synsem|loc 1 [ cat [ head [ *verb*, vform *fin*, dsl 1 ]
                     comps ⟨ NP[*nom*] 2, NP[*acc*] 3 ⟩ ]
               cont [ ind 4
                      rels ⟨ [ *kennen*, event 4, experiencer 2, theme 3 ] ⟩ ] ]

Through sharing of the local value and the dsl value in (50), the syntactic and semantic information of the verb trace is present at its maximal projection, and the verb in initial position can check whether the projection of the trace is compatible.<sup>18</sup>
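
The cyclicity of (50) (see footnote 18) can be made concrete with a self-referential data structure. The following Python snippet is a purely hypothetical illustration of the token identity of the local value and the dsl value contained in it:

```python
# Toy illustration of the cyclic description in (50): the dsl value
# inside the head is token-identical (tag 1) to the whole local value
# that contains it. Python dictionaries allow such self-reference.

local = {'cat': {'head': {'pos': 'verb', 'vform': 'fin', 'dsl': None}}}
local['cat']['head']['dsl'] = local  # the structure sharing tagged 1

# Following cat|head|dsl leads back to the local value itself:
assert local['cat']['head']['dsl'] is local
print(local['cat']['head']['dsl']['cat']['head']['vform'])  # fin
```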

The special lexical item for verb-initial position is licensed by the following lexical rule:<sup>19</sup>

(51) Lexical rule for the verb in initial position (simplified):

[ synsem|loc 1 [ cat|head [ *verb*, vform *fin*, initial − ] ] ]
↦
[ synsem|loc|cat [ head [ *verb*, vform *fin*, initial + ]
                   comps ⟨ VP[dsl 1 ] ⟩ ] ]

<sup>17</sup>The feature dsl was proposed by Jacobson (1987a) in the framework of Categorial Grammar to describe head movement in English inversions. Borsley (1989) adopted this idea and translated it into HPSG terms, thereby showing how head movement in an HPSG variant of the CP/IP system can be modeled using dsl. The introduction of the dsl feature to describe head movement processes in HPSG is motivated by the fact that, unlike the long-distance dependencies that will be discussed in Section 9.5, this kind of movement is local.

The suggestion to percolate information about the verb trace as part of the head information comes from Oliva (1992).

<sup>18</sup>Note that the description in (50) is cyclic since the tag <sup>1</sup> is used inside itself. See Section 6.5 on cyclic feature descriptions. This cyclic description is the most direct way to express that a linguistic object with certain local properties is missing and to pass this information on along the head path as the value of the dsl feature. This will be even clearer when we look at the final version of the verb trace in (52) on page 299. <sup>19</sup>The lexical rule analysis cannot explain sentences such as (i):

(i) Karl kennt und liebt diese Schallplatte.
Karl knows and loves this record
'Karl knows and loves this record.'

This has to do with the fact that the lexical rule cannot be applied to the result of coordination, which constitutes a complex syntactic object. If we apply the lexical rule individually to each verb, then we arrive at variants of the verbs which would each select verb traces for *kennen* 'to know' and *lieben* 'to love'. Since the cat values of the conjuncts are identified with each other in coordinations, coordinations involving the V1 variants of *kennt* and *liebt* would be ruled out since the dsl values of the selected VPs contain the meaning of the respective verbs and are hence not compatible (Müller 2005b: 13). Instead of a lexical rule, one must assume a unary syntactic rule that applies to the phrase *kennt und liebt* 'knows and loves'. As we have seen, lexical rules in the HPSG formalization assumed here correspond to unary rules such that the difference between (51) and a corresponding syntactic rule is mostly a difference in representation.


The verb licensed by this lexical rule selects a maximal projection of the verb trace which has the same local properties as the input verb. This is achieved by the coindexation of the local values of the input verb and the dsl values of the selected verb projection. Only finite verbs in final position (initial−) can be the input for this rule. The output is a verb in initial position (initial+). The corresponding extended analysis is given in Figure 9.11. V1-LR stands for the verb-initial lexical rule.

Figure 9.11: Visualization of the analysis of *Kennt jeder diesen Roman?* 'Does everyone know this novel?'

The lexical rule in (51) licenses a verb that selects a VP ( 1 in Figure 9.11). The dsl value of this VP corresponds to the local value of the verb that is the input of the lexical rule. Part of the dsl value is also the valence information represented in Figure 9.11 ( 2 ). Since dsl is a head feature, the dsl value of the VP is identical to that of the verb trace, and since the local value of the verb trace is identified with the dsl value, the comps information of the verb *kennen* is also available at the trace. The combination of the trace with its arguments proceeds exactly as with an ordinary verb.

It would be unsatisfactory if we had to assume a special trace for every verb. Fortunately, this is not necessary as a general trace as in (52) will suffice for the analysis of sentences with verb movement.

(52) General verb trace following Meurers (2000: 206–208):

phon ⟨⟩
synsem|loc 1 [ cat|head|dsl 1 ]

This may seem surprising at first glance, but if we look closer at the interaction of the lexical rule (51) and the percolation of the dsl feature in the tree, then it becomes clear that the dsl value of the verb projection and therefore the local value of the verb trace is determined by the local value of the input verb. In Figure 9.11, *kennt* is the input for the verb movement lexical rule. The relevant structure sharing ensures that, in the analysis of (46), the local value of the verb trace corresponds exactly to what is given in (50).
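
The interplay of the lexical rule in (51) and the general trace in (52) can be sketched schematically as follows. This is a deliberately simplified Python illustration with invented names (v1_lexical_rule, instantiate_trace); in the theory itself the identities below are established by structure sharing, not by copying values:

```python
# Schematic sketch: the V1 item selects a VP whose dsl value is the
# local value of the input verb; the general trace identifies its own
# local value with whatever dsl value the context demands.

kennt_local = {                     # local value of kennt in final position
    'head': 'verb[fin]',
    'comps': ['NP[nom]', 'NP[acc]'],
}

def v1_lexical_rule(verb_local):
    """Output of (51): a verb-initial item selecting VP[dsl = input local]."""
    return {'head': 'verb[fin, initial+]',
            'comps': [{'vp': 'saturated', 'dsl': verb_local}]}

def instantiate_trace(dsl_value):
    """General verb trace (52): empty phon, local identified with dsl."""
    return {'phon': [], 'local': dsl_value}

kennt_v1 = v1_lexical_rule(kennt_local)
trace = instantiate_trace(kennt_v1['comps'][0]['dsl'])

# The trace ends up with the valence of kennt, as in (50):
print(trace['local']['comps'])  # ['NP[nom]', 'NP[acc]']
```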

The most important points of the analysis of verb position are summarized below:

- There is a phonologically empty verb trace in verb-final position; a single general trace suffices, since its local value is identified with its dsl value.
- dsl is a head feature, so the information about the missing verb is projected to the maximal projection of the trace.
- A lexical rule licenses a special item for the verb in initial position; this item selects a saturated projection of a verb trace whose dsl value is identical to the local value of the input verb.

After discussing the analysis of verb-first sentences, we will now turn to local reordering.

# **9.4 Local reordering**

There are several possibilities for the analysis of constituent order in the middle field: one can assume completely flat structures as in GPSG (Kasper 1994, Bouma & van Noord 1998), or instead assume binary branching structures and allow for arguments to be saturated in any order. A compromise was proposed by Kathol (2001) and Müller (1999b, 2002a, 2004d): binary branching structures with a special list that contains the arguments and adjuncts belonging to one head. The arguments and adjuncts are allowed
to be freely ordered inside such lists. See Reape (1994) and Section 11.7.2.2 of this book for the formal details of these approaches. Both the completely flat analysis and the compromise have proved to be on the wrong track (see Müller 2005b, 2004e and Müller 2007a: Section 9.5.1) and therefore, I will only discuss the analysis with binary branching structures.

Figure 9.12 shows the analysis of (53a).

(53) a. [weil] jeder diesen Roman kennt
because everyone.nom this.acc novel knows
'because everyone knows this novel'
b. [weil] diesen Roman jeder kennt
because this.acc novel everyone.nom knows
'because everyone knows this novel'

Figure 9.12: Analysis of constituent order in HPSG: unmarked order

The arguments of the verb are combined with the verb starting with the last element of the comps list, as explained in Section 9.1.2. The analysis of the marked order is shown in Figure 9.13. Both trees differ only in the order in which the elements are taken off the comps list.

Figure 9.13: Analysis of constituent order in HPSG: marked order

In Figure 9.12, the last element of the comps list is discharged first and in Figure 9.13 the first one is.

The following schema is a revised version of the Head-Complement Schema:

#### **Schema 3 (Head-Complement Schema (binary branching))**

*head-complement-phrase* ⇒
synsem|loc|cat|comps 1 ⊕ 3
head-dtr|synsem|loc|cat|comps 1 ⊕ ⟨ 2 ⟩ ⊕ 3
non-head-dtrs ⟨ [ synsem 2 ] ⟩

Whereas in the first version of the Head-Complement Schema it was always the last element from the comps list that was combined with the head, the comps list is divided into three parts using *append*: a list of arbitrary length ( 1 ), a list consisting of exactly one element (⟨ 2 ⟩) and a further list of arbitrary length ( 3 ). The lists 1 and 3 are combined and the result is the comps value of the mother node.
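
The three-way split of the comps list can be made concrete with a small sketch. The following Python function is a hypothetical illustration, not part of the schema's formalization; it enumerates all ways of discharging a single element, and constraining one of the outer lists to be empty yields the fixed-order case discussed next:

```python
# Toy illustration of the binary-branching Head-Complement Schema:
# the head's comps list is split as 1 ⊕ ⟨2⟩ ⊕ 3, exactly one element
# (2) is discharged, and 1 ⊕ 3 becomes the comps value of the mother.

def head_complement_splits(comps):
    """Enumerate every way of discharging one element of a comps list."""
    for i, discharged in enumerate(comps):
        one, three = comps[:i], comps[i + 1:]  # the lists 1 and 3
        yield discharged, one + three          # element 2, mother's comps

for arg, rest in head_complement_splits(['NP[nom]', 'NP[acc]']):
    print(f'discharge {arg:9} -> remaining comps {rest}')
# A fixed-order language like English would additionally require one of
# the two outer lists (1 or 3) to be empty, leaving only one split.
```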

Languages with fixed constituent order (such as English) differ from languages such as German in that they discharge the arguments starting from one side (for more on the subject in English, see Section 9.1.1), whereas languages with free constituent order can combine arguments with the verb in any order. In languages with fixed constituent order, either 1 or 3 is always the empty list. Since German structures are not restricted with regard to 1 or 3 – that is, 1 and 3 can either be the empty list or contain elements – the intuition is captured that there are fewer restrictions in languages with free constituent order than in languages with fixed order. We can compare this to the Kayneian analysis from Section 4.6.1, where it was assumed that all languages are derived from the base order [specifier [head complement]] (see Figure 4.20 on page 149 for Laenzlinger's analysis of German as an SVO-language (Laenzlinger 2004)). In these kinds of analyses, languages such as English constitute the most basic case and languages with free ordering require considerable theoretical effort to get the order right. In comparison to that, the analysis proposed here requires more theoretical restrictions the more a language restricts permutations of its constituents. The complexity of the licensed structures does not differ considerably from language to language under an HPSG approach. Languages differ only in the type of branching they have.<sup>20,21</sup>

The analysis presented here utilizing the combination of arguments in any order is similar to that of Fanselow (2001) in the framework of GB/MP as well as the Categorial Grammar analyses of Hoffman (1995: Section 3.1) and Steedman & Baldridge (2006). Gunji proposed similar HPSG analyses for Japanese as early as 1986. See also Kim (2016: 16) for such an analysis of Korean.

<sup>20</sup>This does not exclude that the structures in question have different properties as far as their processability by humans is concerned. See Gibson (1998), Hawkins (1999) and Chapter 15.

<sup>21</sup>Haider (1997b: 18) has pointed out that the branching type of VX languages differs from that of XV languages in analyses of the kind that is proposed here. This affects the c-command relations and therefore has implications for Binding Theory in GB/MP. However, the direction of branching is irrelevant for HPSG analyses as Binding Principles are defined using o-command (Pollard & Sag 1994: Chapter 6) and o-command makes reference to the Obliqueness Hierarchy, that is, the order of elements in the comps list rather than the order in which these elements are combined with the head.

# **9.5 Long-distance dependencies**

The analysis of long-distance dependencies utilizes techniques that were originally developed in GPSG: information about missing constituents is passed up the tree (or feature structure).<sup>22</sup> There is a trace at the position where the fronted element would normally occur. Figure 9.14 shows the analysis of (54).

(54) [Diesen Roman] kennt \_ jeder \_.
this novel knows everyone
'Everyone knows this novel.'

Figure 9.14: Analysis of long-distance dependencies in HPSG

In principle, one could also assume that the object is extracted from its unmarked position (see Section 3.5 on the unmarked position). The extraction trace would then follow the subject:

(55) [Diesen Roman] kennt jeder \_ \_.
this novel knows everyone
'Everyone knows this novel.'

Fanselow (2004c) argues that certain phrases can be placed in the Vorfeld without having a special pragmatic function. For instance, (expletive) subjects in active sentences (56a),

<sup>22</sup>In HPSG, nothing is actually 'passed up' in a literal sense in feature structures or trees. This could be seen as one of the most important differences between declarative theories (e.g., HPSG) and derivational theories like transformational grammars (see Section 15.1). Nevertheless, it makes sense for expository purposes to explain the analysis as if the structure were built bottom-up, but linguistic knowledge is independent of the direction of processing. In recent computer implementations, structure building is mostly carried out bottom-up, but there were other systems which worked top-down. The only thing that is important in the analysis of nonlocal dependencies is that the information about the missing element on all intermediate nodes is identical to the information in the filler and the gap.

temporal adverbials (56b), sentence adverbials (56c), dative objects of psychological verbs (56d) and objects in passives (56e) can be placed in the Vorfeld, even though they are neither topic nor focus.

	- b. Am Sonntag hat ein Eisbär einen Mann gefressen.
on Sunday has a polar.bear a man eaten
'On Sunday, a polar bear ate a man.'
	- c. Vielleicht hat der Schauspieler seinen Text vergessen.
perhaps has the actor his text forgotten
'Perhaps, the actor has forgotten his text.'
	- d. Einem Schauspieler ist der Text entfallen.
a.dat actor is the.nom text forgotten
'An actor forgot the text.'
	- e. Einem Kind wurde das Fahrrad gestohlen.
a.dat child aux the.nom bike stolen
'A bike was stolen from a child.'

Fanselow argues that information structural effects can be due to reordering in the Mittelfeld. So, by ordering the accusative object as in (57), one can achieve certain effects:

(57) Kennt diesen Roman jeder?
knows this novel everybody
'Does everybody know this novel?'

If one assumes that there are frontings to the *Vorfeld* that do not have information structural constraints attached to them and that information structural constraints are associated with reorderings in the Mittelfeld, then the assumption that the initial element in the Mittelfeld is fronted explains why the examples in (56) are not information structurally marked. The elements in the Vorfeld are unmarked in the initial position in the Mittelfeld as well:

(58) a. Regnet es?
rains it
'Is it raining?'


So, I assume that the trace of a fronted argument that would not be Mittelfeld-initial in the unmarked order is combined with the head last, as described in Section 9.4. Of course, the same applies to all extracted arguments that would be Mittelfeld-initial in the unmarked order anyway: the traces are combined last with the head as for instance in (59):

(59) [Jeder] kennt \_ diesen Roman \_.
everybody knows this novel
'Everyone knows this novel.'

After this rough characterization of the basic idea, we now turn to the technical details: unlike verb movement, which was discussed in Section 9.3, constituent movement is nonlocal, which is why the two movement types are modeled with different features (slash vs. dsl). dsl is a head feature and, like all other head features, projects to the highest node of a projection (for more on the Head Feature Principle, see page 280). slash, on the other hand, is a feature that belongs to the nonloc features represented under synsem|nonloc. The value of the nonloc feature is a structure with the features inherited (or inher for short) and to-bind. The value of inher is a structure containing information about elements involved in a long-distance dependency. (60) gives the structure assumed by Pollard & Sag (1994: 163):<sup>23</sup>

(60) *nonloc*:
que: *list of npros*
rel: *list of indices*
slash: *list of local structures*

que is important for the analysis of interrogative clauses as is rel for the analysis of relative clauses. Since these will not feature in this book, they will be omitted in what follows. The value of slash is a list of *local* objects.

As with the analysis of verb movement, it is assumed that there is a trace in the position where the accusative object would normally occur and that this trace shares the properties of that object. The verb can therefore satisfy its valence requirements locally. Information about whether there has been a combination with a trace and not with a genuine argument is represented inside the complex sign and passed upward in the tree. The long-distance dependency can then be resolved by an element in the prefield higher in the tree.

<sup>23</sup>Pollard & Sag assume that the values of que, rel, and slash are sets rather than lists. The math behind sets is rather complicated, which is why I assume lists here.

Long-distance dependencies are introduced by the trace, which has the local value of the required argument in its slash list. (61) shows the description of the trace as is required for the analysis of (54):

(61) Extraction trace for the accusative object of *kennen* 'to know':

word
phon ⟨⟩
synsem [ loc 1 [ cat [ head [ *noun*, cas *acc* ]
                       comps ⟨⟩ ] ]
         nonloc [ inher|slash ⟨ 1 ⟩
                  to-bind|slash ⟨⟩ ] ]

Since traces do not have internal structure (no daughters), they are of type *word*. The trace has the same properties as the accusative object. The fact that the accusative object is not present at the position occupied by the trace is represented by the value of slash.

The following principle is responsible for ensuring that nonloc information is passed up the tree.

### **Principle 3 (Nonlocal Feature Principle)**

In a headed phrase, for each nonlocal feature, the inherited value of the mother is a list that is the concatenation of the inherited values of the daughters minus the elements in the to-bind list of the head daughter.
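
A minimal sketch of this computation, using toy Python with plain lists standing in for the slash values; the function name and encoding are invented for this illustration:

```python
# Toy illustration of the Nonlocal Feature Principle: the inherited
# value of the mother is the concatenation of the daughters' inherited
# values minus the elements in the to-bind list of the head daughter.

def mother_inherited(daughters_inherited, head_to_bind):
    combined = [x for lst in daughters_inherited for x in lst]
    return [x for x in combined if x not in head_to_bind]

# An extraction trace contributes a local value to slash; it percolates
# unchanged until the Head-Filler Schema binds it off.
print(mother_inherited([['NP[acc]'], []], head_to_bind=[]))           # ['NP[acc]']
print(mother_inherited([['NP[acc]'], []], head_to_bind=['NP[acc]']))  # []
```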

The Head-Filler Schema (Schema 4) licenses the highest node in Figure 9.15 on the next page.

### **Schema 4 (Head-Filler Schema)**

*head-filler-phrase* ⇒
head-dtr|synsem [ loc|cat [ head [ *verb*, vform *fin*, initial + ]
                            comps ⟨⟩ ]
                  nonloc [ inher|slash ⟨ 1 ⟩
                           to-bind|slash ⟨ 1 ⟩ ] ]
non-head-dtrs ⟨ [ synsem [ loc 1, nonloc|inher|slash ⟨⟩ ] ] ⟩

The schema combines a finite, verb-initial clause (initial+) that has an element in slash with a non-head daughter whose local value is identical to the slash element. In this structure, no arguments are saturated. Nothing can be extracted from the filler daughter itself, which is ensured by the specification of the slash value of the non-head daughter. Figure 9.15 shows a more detailed variant of the analysis of fronting to the prefield.

Figure 9.15: Analysis of *Diesen Roman kennt jeder.* 'Everyone knows this novel.' combined with the verb movement analysis for verb-initial order

The verb movement trace for *kennt* 'knows' is combined with a nominative NP and an extraction trace. The extraction trace stands for the accusative object in our example. The accusative object is described in the comps list of the verb ( 4 ). Following the mechanism for verb movement, the valence information that was originally contained in the entry for *kennt* (⟨ 3 , 4 ⟩) is present on the verb trace. The combination of the projection of the verb trace with the extraction trace works in exactly the same way as for non-fronted arguments. The slash value of the extraction trace is passed up the tree and bound off by the Head-Filler Schema.

(61) provides the lexical entry for a trace that can function as the accusative object of *kennen* 'to know'. As with the analysis of verb movement, it is not necessary to have numerous extraction traces with differing properties listed in the lexicon. A more general entry such as the one in (62) will suffice:

(62) Extraction trace:

word
phon ⟨⟩
synsem [ loc 1
         nonloc [ inher|slash ⟨ 1 ⟩
                  to-bind|slash ⟨⟩ ] ]

This has to do with the fact that the head can satisfactorily determine the local properties of its arguments and therefore also the local properties of the traces that it combines with. The identification of the object in the comps list of the head with the synsem value of the trace coupled with the identification of the information in slash with information about the fronted element serves to ensure that the only elements that can be realized in the prefield are those that fit the description in the comps list of the head. The same holds for fronted adjuncts: since the local value of the constituent in the prefield is identified with the local value of the trace via the slash feature, there is then sufficient information available about the properties of the trace.

The central points of the preceding analysis can be summarized as follows: information about the local properties of a trace is contained in the trace itself and then present on all nodes dominating it until one reaches the filler. This analysis can offer an explanation for so-called extraction path marking languages where certain elements show inflection depending on whether they are combined with a constituent out of which something has been extracted in a long-distance dependency. Bouma, Malouf & Sag (2001) cite Irish, Chamorro, Palauan, Icelandic, Kikuyu, Ewe, Thompson Salish, Moore, French, Spanish, and Yiddish as examples of such languages and provide corresponding references. Since information is passed on step-by-step in HPSG analyses, all nodes intervening in a long-distance dependency can access the elements in that dependency.

# **9.6 New developments and theoretical variants**

This section and the following one are for advanced readers. They can be skipped without impairing the understanding of the following chapters on other frameworks.

The schemata that were presented in this chapter combine adjacent constituents. The assumption of adjacency can be dropped and discontinuous constituents may be permitted. Variants of HPSG that allow for discontinuous constituents are usually referred to as *Linearization-based HPSG*. The first formalization was developed by Mike Reape (1991, 1992, 1994). Proponents of linearization approaches are for instance Kathol (1995, 2000), Donohue & Sag (1999), Richter & Sailer (1999b), Crysmann (2008), Beavers & Sag (2004), Sato (2006), Wetta (2011). I also suggested linearization-based analyses (Müller 1999b, 2002a) and implemented a large-scale grammar fragment based on Reape's ideas (Müller 1996c). Linearization-based approaches to German sentence structure are similar to the GPSG approach in that it is assumed that the verb and its arguments and adjuncts are members of the same linearization domain and hence may be realized in any order. For instance, the verb may precede arguments and adjuncts or follow them. Hence, no empty element for the verb in final position is necessary. While this allows for grammars without empty elements for the analysis of the verb position, it is unclear how examples with apparent multiple frontings can be accounted for, while such data can be captured directly in the proposal suggested in this chapter. The whole issue is discussed in more detail in Müller (2023a). I will not explain Reape's formalization here, but defer its discussion until Section 11.7.2.2, where the discontinuous, non-projective structures of some Dependency Grammars are compared to linearization-based HPSG approaches. Apparent multiple frontings and the problems they pose for simple linearization-based approaches are discussed in Section 11.7.1.

# **9.7 Summary and classification**

In HPSG, feature descriptions are used to model all properties of linguistic objects: roots, words, lexical rules and dominance schemata are all described using the same formal tools. Unlike GPSG and LFG, there are no separate phrase structure rules. Thus, although HPSG stands for Head-Driven Phrase Structure Grammar, it is not a phrase structure grammar. In HPSG implementations, a phrase structure backbone is often used to increase the efficiency of processing. However, this is not part of the theory and linguistically not necessary.

HPSG differs from Categorial Grammar in that it assumes considerably more features and also in that the way in which features are grouped plays an important role for the theory.

Long-distance dependencies are not analyzed using function composition as in Categorial Grammar, but instead as in GPSG by appealing to the percolation of information in the tree. In this way, it is possible to analyze pied-piping constructions such as those discussed in Section 8.6 with just one lexical item per relative pronoun, whose relevant local properties are identical to those of the demonstrative pronoun. The relative clause in (63) would be analyzed as a finite clause from which a PP has been extracted:

(63) der Mann, [RS [PP an den] [S/PP wir gedacht haben]]
the man on who we thought have
'the man we thought of'

For relative clauses, it is required that the first daughter contains a relative pronoun. This can, as shown in the English examples on page 260, be in fact very deeply embedded. Information about the fact that *an den* 'of whom' contains a relative pronoun is provided in the lexical entry for the relative pronoun *den* by specifying the value of nonloc| inher|rel. The Nonlocal Feature Principle passes this information on upwards so that the information about the relative pronoun is contained in the representation of the phrase *an den*. This information is bound off when the relative clause is put together (Pollard & Sag 1994: Chapter 5; Sag 1997). It is possible to use the same lexical entry for *den* in the analyses of both (63) and (64) as – unlike in Categorial Grammar – the relative pronoun does not have to know anything about the contexts in which it can be used.

(64) der Mann, [RS [NP den] [S/NP wir kennen]]
the man that we know
'the man that we know'

Any theory that wants to maintain the analysis sketched here will have to have some mechanism to make information available about the relative pronoun in a complex phrase. If we have such a mechanism in our theory – as is the case in LFG and HPSG – then we can also use it for the analysis of long-distance dependencies. Theories such as LFG and HPSG are therefore more parsimonious with their descriptive tools than other theories when it comes to the analysis of relative phrases.

In the first decade of HPSG history (Pollard & Sag 1987, 1994, Nerbonne, Netter & Pollard 1994), despite the differences already mentioned here, HPSG was still very similar to Categorial Grammar in that it was a strongly lexicalized theory. The syntactic
make-up and semantic content of a phrase were determined by the head (hence the term *head-driven*). In cases where head-driven analyses were not straightforwardly possible because no head could be identified in the phrase in question, it was commonplace to assume empty heads. An example of this is the analysis of relative clauses in Pollard & Sag (1994: Chapter 5). Since an empty head can be assigned any syntactic valence and an arbitrary semantics (for discussion of this point, see Chapter 19), one has not really explained anything: very good reasons are needed for assuming an empty head, for example, that this empty position can be realized in other contexts. This is, however, not the case for empty heads that are only proposed in order to save theoretical assumptions. Therefore, Sag (1997) developed an analysis of relative clauses without any empty elements. As in the analyses sketched for (63) and (64), the relative phrases are combined directly with the partial clause in order to form the relative clause. For the various observable types of relative clauses in English, Sag proposes different dominance rules. His analysis constitutes a departure from strong lexicalism: in Pollard & Sag (1994), there are six dominance schemata, whereas there are 23 in Ginzburg & Sag (2000).

The tendency to a differentiation of phrasal schemata can also be observed in the proceedings of recent conferences. The proposals range from the elimination of empty elements to radically phrasal analyses (Haugereid 2007, 2009).<sup>24</sup>

Even if this tendency towards phrasal analyses may result in some problematic analyses, it is indeed the case that there are areas of grammar where phrasal analyses are required (see Section 21.10). For HPSG, this means that it is no longer entirely head-driven and is therefore neither Head-Driven nor Phrase Structure Grammar.

HPSG makes use of typed feature descriptions to describe linguistic objects. Generalizations can be expressed by means of hierarchies with multiple inheritance. Inheritance also plays an important role in Construction Grammar. In theories such as GPSG, Categorial Grammar and TAG, it does not form part of theoretical explanations. In implementations, macros (abbreviations) are often used for co-occurring feature-value pairs (Dalrymple, Kaplan & King 2004). Depending on the architecture assumed, such macros are not suitable for the description of phrases since, in theories such as GPSG and LFG, phrase structure rules are represented differently from other feature-value pairs (however, see Asudeh, Dalrymple & Toivonen (2008, 2013) for macros and inheritance used for c-structure annotations). Furthermore, there are further differences between types and macros, which are of a more formal nature: in a typed system, it is possible under certain conditions to infer the type of a particular structure from the presence of certain features and of certain values. With macros, this is not the case as they are only abbreviations. The consequences for linguistic analyses entailed by these differences are, however, minimal.

HPSG differs from GB theory and later variants in that it does not assume transformations. In the 80s, representational variants of GB were proposed, that is, it was assumed that there was no D-structure from which an S-structure is created by simultaneous marking of the original position of moved elements. Instead, one assumed the S-structure with traces straight away and the assumption that there were further movements in the mapping of S-structure to Logical Form was also abandoned (Koster 1978; Haider 1993: Section 1.4; Frey 1993: 14). This view corresponds to the view in HPSG and many of the analyses in one framework can be translated into the other.

<sup>24</sup>For discussion, see Müller (2007b) and Section 21.3.6.


In GB theory, the terms subject and object do not play a direct role: one can use these terms descriptively, but subjects and objects are not marked by features or similar devices. Nevertheless, it is possible to make the distinction since subjects and objects are usually realized in different positions in the trees (the subject in the specifier position of IP and the object as the complement of the verb). In HPSG, subject and object are also not primitives of the theory. Since valence lists (or arg-st lists) are ordered, however, it is possible to associate the arg-st elements with grammatical functions: if there is a subject, it occurs in the first position of the valence list and objects follow.<sup>25</sup> For the analysis of (65b) in a transformation-based grammar, the aim is to connect the base order in (65a) and the derived order in (65b). Once one has recreated the base order, then it is clear what is the subject and what is the object. Therefore, transformations applied to the base structure in (65a) have to be reversed.

(65) a. [weil] jeder diesen Roman kennt
because everyone this novel knows
b. [weil] diesen Roman jeder kennt
because this novel everyone knows

In HPSG and also in other transformation-less models, the aim is to assign arguments in the order in (65b) to descriptions in the valence list. The valence list (or arg-st in newer approaches) corresponds in a sense to Deep Structure in GB. The difference is that the head itself is not included in the argument structure, whereas this is the case with D-structure.

Bender (2008c) has shown how one can analyze phenomena from non-configurational languages such as Wambaya by referring to the argument structure of a head. In Wambaya, words that would normally be counted as constituents in English or German can occur discontinuously; that is, an adjective that semantically belongs to a noun phrase and shares the same case, number and gender values with other parts of the noun phrase can occur in a position in the sentence that is not adjacent to the remaining noun phrase. Nordlinger (1998) has analyzed the relevant data in LFG. In her analysis, the various parts of the constituent refer to the f-structure of the sentence and thus indirectly ensure that all parts of the noun phrase have the same case. Bender adopts a variant of HPSG where valence information is not removed from the valence list after an argument has been combined with its head, but rather this information remains in the valence list and is passed up towards the maximal projection of the head (Meurers 1999c; Przepiórkowski 1999b; Müller 2007a: Section 17.4). Similar proposals were made in GB by Higginbotham (1985: 560) and Winkler (1997). By projecting the complete valence information, it remains accessible in the entire sentence and discontinuous constituents can refer to it (e.g., via mod), and the respective constraints can be formulated.<sup>26</sup>

<sup>25</sup>When forming complex predicates, an object can occur in first position. See Müller (2002a: 157) for the long passive with verbs such as *erlauben* 'allow'. In general, the following holds: the subject is the first argument with structural case.

In this analysis, the argument structure in HPSG corresponds to f-structure in LFG. The extended head domains of LFG, where multiple heads can share the same f-structure, can also be modeled in HPSG. To this end, one can utilize function composition as it was presented in the chapter on Categorial Grammar (see Section 8.5.2). The exact way in which this is translated into HPSG cannot be explained here due to space restrictions. The reader is referred to the original works by Hinrichs & Nakazawa (1994a) and the explanation in Müller (2007a: Chapter 15).

Valence information plays an important role in HPSG. The lexical item of a verb in principle predetermines the set of structures in which the item can occur. Using lexical rules, it is possible to relate one lexical item to other lexical items. These can be used in other sets of structures. So one can see the functionality of lexical rules in establishing a relation between sets of possible structures. Lexical rules correspond to transformations in Transformational Grammar. This point is discussed in more detail in Section 19.5. The effect of lexical rules can also be achieved with empty elements. This will also be the matter of discussion in Section 19.5.

In GPSG, metarules were used to license rules that created additional valence patterns for lexical heads. In principle, metarules could also be applied to rules without a lexical head. This is explicitly ruled out by Flickinger (1983) and Gazdar et al. (1985: 59) using a special constraint. Flickinger, Pollard & Wasow (1985: 265) pointed out that this kind of constraint is unnecessary if one uses lexical rules rather than metarules since the former can only be applied to lexical heads.

For a comparison of HPSG and Stabler's Minimalist Grammars, see Section 4.6.4. Torr's implementation of Minimalist Grammars is discussed in Section 4.7.2 on pages 177–180.

### **Comprehension questions**

(66) Dem Mann wurde ein Buch geschenkt.
the.dat man aux a.nom book given
'The man was given a book.'

<sup>26</sup>See also Müller (2008) for an analysis of depictive predicates in German and English that makes reference to the list of realized or unrealized arguments of a head, respectively. This analysis is also explained in Section 18.2.

# **Exercises**

1. Give a feature description for (67) ignoring *dass*.

(67) [dass] Max lacht
that Max laughs
'that Max laughs'

2. The analysis of the combination of a noun with a modifying adjective in Section 9.1.7 was just a sketch of an analysis. It is, for example, not explained how one can ensure that the adjective and noun agree in case. Consider how it would be possible to expand such an analysis so that the adjective-noun combination in (68a) can be analyzed, but not the one in (68b):

(68) a. eines interessanten Romans
an.gen interesting.gen novel.gen
'an interesting novel'
b. \* eines interessanter Romans
an.gen interesting.nom novel.gen

# **Further reading**

Here, the presentation of the individual parts of the theory was – as with other theories – kept relatively short. For a more comprehensive introduction to HPSG, including motivation of the feature geometry, see Müller (2007a). In particular, the analysis of the passive was sketched in brief here. The entire story including the analysis of unaccusative verbs, adjectival participles, modal infinitives as well as diverse passive variants and the long passive can be found in Müller (2002a: Chapter 3) and Müller (2007a: Chapter 17).

Overviews of HPSG can be found in Levine & Meurers (2006), Przepiórkowski & Kupść (2006), Bildhauer (2014) and Müller (2015a). Language Science Press published a large handbook on HPSG (Müller, Abeillé, Borsley & Koenig 2021) containing chapters on foundational assumptions, the history of the framework, various syntactic phenomena, non-syntactic levels of description like morphology, semantics, information structure, dialog and the comparison with other frameworks (Minimalism, Categorial Grammar, Construction Grammar, Lexical Functional Grammar, Dependency Grammar).

Müller (2014b) and Müller & Machicao y Priemer (2019) are two papers in collections comparing frameworks. The first one is in German and contains an analysis of a newspaper text.*<sup>a</sup>* The second one is in English and contains a general description of the framework and a detailed analysis of the sentence in (69):*<sup>b</sup>*

(69) After Mary introduced herself to the audience, she turned to a man that she had met before.

The books are similar to this one in that the respective authors describe a shared set of phenomena within their favorite theories, but the difference is that the descriptions come straight from the horse's mouth. The newspaper text is especially interesting since, for some theories, it was the first time they were applied to real-life data. As a result, one sees phenomena covered that are rarely treated in the rest of the literature.

*a* See https://hpsg.hu-berlin.de/~stefan/Pub/artenvielfalt.html for the example sentences and some interactive analyses of the examples.

*b* See https://hpsg.hu-berlin.de/~stefan/Pub/current-approaches-hpsg.html for an interactive analysis of the example.

# **10 Construction Grammar**

Like LFG and HPSG, *Construction Grammar* (CxG) forms part of West Coast linguistics. It has been influenced considerably by Charles Fillmore, Paul Kay and George Lakoff (all three at Berkeley) and Adele Goldberg (who completed her PhD in Berkeley and is now in Princeton) (Fillmore 1988, Fillmore, Kay & O'Connor 1988, Kay & Fillmore 1999, Kay 2002, 2005, Goldberg 1995, 2006).

Fillmore, Kay, Jackendoff and others have pointed out the fact that, to a large extent, languages consist of complex units that cannot straightforwardly be described with the tools that we have seen thus far. In frameworks such as GB, an explicit distinction is made between core grammar and the periphery (Chomsky 1981a: 8), whereby the periphery is mostly disregarded as uninteresting when formulating a theory of Universal Grammar. The criticism leveled at such practices by CxG is justified since what counts as the 'periphery' sometimes seems completely arbitrary (Müller 2014c) and no progress is made by excluding large parts of the language from the theory just because they are irregular to a certain extent.

In Construction Grammar, idiomatic expressions are often discussed with regard to their interaction with regular areas of grammar. Kay & Fillmore (1999) studied the *What's X doing Y?*-construction in their classic essay. (1) contains some examples of this construction:

	- a. What is this scratch doing on the table?
	- b. What do you think your name is doing in my book?

The examples show that we are clearly not dealing with the normal meaning of the verb *do*. In addition to this semantic bleaching, there are particular morphosyntactic properties that have to be satisfied in this construction: the verb *do* must always be present, and it must appear in the form of the present participle. Kay and Fillmore develop an analysis explaining this construction and also capturing some of the similarities between the WXDY-construction and the rest of the grammar.

There are a number of variants of Construction Grammar:

- Berkeley Construction Grammar (Fillmore 1988, Kay & Fillmore 1999)
- Goldbergian/Lakovian Construction Grammar (Goldberg 1995, 2006)
- Radical Construction Grammar (Croft 2001)
- Embodied Construction Grammar (Bryant 2003)
- Fluid Construction Grammar (Steels 2011)
- Sign-Based Construction Grammar (Sag 2010, 2012)

The aim of Construction Grammar is to both describe and theoretically explore language in its entirety. In practice, however, irregularities in language are often given far more importance than the phenomena described as 'core grammar' in GB. Construction Grammar analyses usually treat phenomena as phrasal patterns. These phrasal patterns are represented in inheritance hierarchies (e.g., Croft 2001, Goldberg 2003b). An example of the assumption of a phrasal construction is Goldberg's analysis of resultative constructions. Goldberg (1995) and Goldberg & Jackendoff (2004) argue for the construction status of resultatives. In their view, there is no head in (2) that determines the number of arguments.

(2) Willy watered the plants flat.

The number of arguments is determined by the construction instead, that is, by a rule or schema saying that the subject, verb, object and a predicative element must occur together and that the entire complex has a particular meaning. This view is fundamentally different from analyses in GB, Categorial Grammar, LFG<sup>1</sup> and HPSG. In the aforementioned theories, it is commonly assumed that arguments are always selected by lexical heads and not independently licensed by phrasal rules. See Simpson (1983), Neeleman (1994), Wunderlich (1997), Wechsler (1997), and Müller (2002a) for corresponding work in LFG, GB, Wunderlich's Lexical Decomposition Grammar and HPSG.

Like the theories discussed in Chapters 5–9, CxG is also a non-transformational theory. Furthermore, no empty elements are assumed in most variants of the theory and the assumption of lexical integrity is maintained as in LFG and HPSG. It can be shown that these assumptions are incompatible with phrasal analyses of resultative constructions (see Section 21.2.2 and Müller 2006, 2007b). This point will not be explained further here. Instead, I will discuss the work of Fillmore and Kay to prepare the reader to be able to read the original articles and subsequent publications. Although the literature on Construction Grammar is now relatively vast, there is very little work on the basic formal assumptions or analyses that have been formalized precisely. Examples of more formal works are Kay & Fillmore (1999), Kay (2002), Michaelis & Ruppenhofer (2001), and Goldberg (2003b). Another formal proposal was developed by Jean-Pierre Koenig (1999) (formerly Berkeley). This work is couched in the framework of HPSG, but it has been heavily influenced by CxG. Fillmore and Kay's revisions of their earlier work took place in close collaboration with Ivan Sag. The result was a variant of HPSG known as Sign-Based Construction Grammar (SBCG) (Sag 2010, 2012). See Section 10.6.2 for further discussion.

John Bryant, Nancy Chang, and Eva Mok have developed a system for the implementation of Embodied Construction Grammar (Bryant 2003). Luc Steels is working on the simulation of language evolution and language acquisition (Steels 2003). Steels works experimentally, modeling virtual communities of interacting agents.

<sup>1</sup> See Alsina (1996) and Asudeh, Dalrymple & Toivonen (2008, 2013), however. For more discussion of this point, see Sections 21.1.3 and 21.2.2.

Apart from this, he uses robots that interact in language games (Steels 2015). Steels stated (p. c. 2007) that there is still a long way to go until robots will finally be able to learn to speak, but the current state of the art is already impressive. Steels can use robots that have a visual system (camera and image processing) and use visual information paired with audio information in simulations of language acquisition. The implementation of Fluid Construction Grammar is documented in Steels (2011) and Steels (2012). The second book contains parts about German, in which the implementation of German declarative clauses and *w* interrogative clauses is explained with respect to topological fields (Micelli 2012). The FCG system, various publications and example analyses are available at: http://www.fcg-net.org/. Jurafsky (1996) developed a Construction Grammar for English that was paired with a probabilistic component. He showed that many performance phenomena discussed in the literature (see Chapter 15 on the Competence/Performance Distinction) can be explained with recourse to probabilities of phrasal constructions and valence properties of words. Bannard, Lieven & Tomasello (2009) use a probabilistic context-free grammar to model grammatical knowledge of two- and three-year-old children.

Lichte & Kallmeyer (2017) show that their version of Tree-Adjoining Grammar can be seen as a formalization of several tenets of Construction Grammar. See for example the analysis of idioms explained in Figure 18.6 on page 566.

# **10.1 General remarks on the representational format**

In this section, I will discuss the mechanisms of Berkeley Construction Grammar (BCG). As I pointed out in Müller (2006), there are fundamental problems with the formalization of BCG. The details will be given in Section 10.6.1. While the framework was developed further into Sign-Based Construction Grammar (see Section 10.6.2) by its creators Kay and Fillmore, there are still authors working in the original framework (for instance Fried 2013). I will therefore present the basic mechanisms here to make it possible to understand the original ideas and put them into a broader context.

As we saw in Section 9.1.2, dominance relations in HPSG are modeled like other properties of linguistic objects using feature-value pairs. In general, CxG uses feature-value pairs to describe linguistic objects, but dominance relations are represented by boxes (Kay & Fillmore 1999, Goldberg 2003b):

The structure can be written using feature-value pairs as follows:

(4) phon ⟨ *the man* ⟩ dtrs ⟨ [ phon ⟨ *the* ⟩ ], [ phon ⟨ *man* ⟩ ] ⟩ 
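
For concreteness, (4) can be mirrored with nested dictionaries; the following Python fragment is merely an illustration of the idea that dominance is encoded by an ordinary feature (dtrs):

```python
# The feature-value encoding of dominance in (4), mirrored with nested
# Python dictionaries: dominance is just another feature (dtrs),
# on a par with phon.

the_man = {
    'phon': ['the', 'man'],
    'dtrs': [
        {'phon': ['the']},
        {'phon': ['man']},
    ],
}

print(the_man['dtrs'][1]['phon'])  # ['man']
```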

# **10.1.1 The head-complement construction**

Kay & Fillmore (1999) assume the following construction for the combination of heads with their complements:

(5) Head plus Complements Construction (HC)

A head is combined with at least one complement (the '+' following the box stands for at least one sign that fits the description in that box). loc+ means that this element must be realized locally. The value of role tells us something about the role that a particular element plays in a construction. Unfortunately, here the term *filler* is used somewhat differently than in GPSG and HPSG. Fillers are not necessarily elements that stand in a long-distance dependency to a gap. Instead, a *filler* is a term for a constituent that fills the argument slot of a head.

The verb phrase construction is a sub-construction of the head-complement construction:

(6) Verb phrase Construction:

The syntactic category of the entire construction is V. Its complements cannot have the grammatical function subject.

The VP construction is a particular type of head-complement construction. The fact that it has much in common with the more general head-complement construction is represented as follows:

(7) Verb phrase Construction with inheritance statement:


This representation differs from the one in HPSG, aside from the box notation, only in the fact that feature descriptions are not typed and as such it must be explicitly stated in the representation from which superordinate construction inheritance takes place. HPSG – in addition to the schemata – has separate type hierarchies specifying the inheritance relation between types.

# **10.1.2 Representation of valence information**

In Kay and Fillmore's approach, valence information is represented in a set (val). The Valence Principle states that local filler-daughters have to be identified with an element in the valence set of the mother.<sup>2</sup> The Subset Principle states that the set values of the head-daughter are subsets of the corresponding sets of the mother. This is the exact opposite of the approach taken in Categorial Grammar and HPSG. In HPSG grammars, valence lists at the mother nodes are shorter, whereas in Berkeley CxG at least as many elements are present on the mother node as on the head-daughter.

# **10.1.3 Semantics**

Semantics in CxG is handled exactly the same way as in HPSG: semantic information is contained in the same feature structure as syntactic information. The relation between syntax and semantics is captured by using the same variable in the syntactic and semantic description. (8) contains a feature description for the verb *arrive*:

(8) Lexical entry for *arrive* following Kay & Fillmore (1999: 11):

[ cat *v*
  sem { [ frame ARRIVE
          args { A } ] }
  val { [ sem { A } ] } ]

Kay & Fillmore (1999: 9) refer to their semantic representations as a notational variant of the Minimal Recursion Semantics of Copestake, Flickinger, Pollard & Sag (2005). In later works, Kay (2005) explicitly uses MRS. As the fundamentals of MRS have already been discussed in Section 9.1.6, I will not repeat them here. For more on MRS, see Section 19.3.

## **10.1.4 Adjuncts**

For the combination of heads and modifiers, Kay and Fillmore assume further phrasal constructions that are similar to the verb phrase constructions discussed above and create a relation between a head and a modifier. Kay and Fillmore assume that adjuncts also contribute something to the val value of the mother node. In principle, val is nothing more than the set of all non-head daughters in a tree.

# **10.2 Passive**

The passive has been described in CxG by means of so-called linking constructions, which are combined with lexical entries in inheritance hierarchies. The base lexicon only specifies which semantic roles a verb has; the way in which these are realized is determined by the respective linking constructions with which the basic lexical entry

<sup>2</sup> Sets in BCG work differently from those used in HPSG. A discussion of this is deferred to Section 10.6.1.


Figure 10.1: Passive and linking constructions

is combined. Figure 10.1 gives an example of a relevant inheritance hierarchy. There is a linking construction for both active and passive as well as lexical entries for *read* and *eat*. There is then a cross-classification resulting in an active and a passive variant of each verb.

The idea behind this analysis goes back to work by Fillmore and Kay between 1995 and 1997<sup>3</sup>, but variants of this analysis were first published in Koenig (1999: Chapter 3) and Michaelis & Ruppenhofer (2001: Chapter 4). Parallel proposals have been made in TAG (Candito 1996; Clément & Kinyon 2003: 188; Kallmeyer & Osswald 2012: 171–172) and HPSG (Koenig 1999, Davis & Koenig 2000, Kordoni 2001).

Michaelis & Ruppenhofer (2001: 55–57) provide the following linking constructions:<sup>4</sup>

(9) a. the *Transitive Construction*:

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v\big]\\
\text{val} & \left\{\begin{bmatrix}\text{gf} & \textit{obj}\\ \text{da} & -\end{bmatrix}\right\}
\end{bmatrix}$$

b. the *Subject Construction*:

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v\big]\\
\text{val} & \big\{\,[\text{gf}\ \textit{subj}]\,\big\}
\end{bmatrix}$$

c. the *Passive Construction*:

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v,\ \text{form}\ \textit{PastPart}\big]\\
\text{val} & \left\{\begin{bmatrix}\text{gf} & \textit{obl}\\ \text{da} & +\\ \text{syn} & P[\textit{von}]/\textit{zero}\end{bmatrix}\right\}
\end{bmatrix}$$


<sup>3</sup> http://www.icsi.berkeley.edu/~kay/bcg/ConGram.html. 2018-02-20.

<sup>4</sup> In the original version of the transitive construction in (9a), there is a feature that has the value da−; however, da is a feature itself and − is the value. I have corrected this in (9a) accordingly.

In the following structures, gf stands for *grammatical function* and da for *distinguished argument*. The distinguished argument usually corresponds to the subject in an active clause.

The structure in (9a) says that the valence set of a linguistic object that is described by the transitive construction has to contain an element that has the grammatical function *object* and whose da value is '−'. The da value of the argument that would be the subject in an active clause is '+' and '−' for all other arguments. The subject construction states that an element of the valence set must have the grammatical function *subject*. In the passive construction, there has to be an element with the grammatical function *oblique* that also has the da value '+'. In the passive construction the element with the da value '+' is realized either as a *by*-PP or not at all (*zero*).

The interaction of the constructions in (9) will be explained on the basis of the verb *schlagen* 'to beat':

(10) Lexical entry for *schlag*- 'beat':

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v\big]\\
\text{val} & \left\{\begin{bmatrix}\text{role} & \textit{agent}\\ \text{da} & +\end{bmatrix},\ \big[\text{role}\ \textit{patient}\big]\right\}
\end{bmatrix}$$

If we combine this lexical entry with the transitive and subject constructions, we arrive at (11a) following Fillmore, Kay, Michaelis, and Ruppenhofer, whereas combining it with the subject and passive construction yields (11b):<sup>5</sup>

 

(11) a. *schlag*- + Subject and Transitive Construction:

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v\big]\\
\text{val} & \left\{\begin{bmatrix}\text{role} & \textit{agent}\\ \text{gf} & \textit{subj}\\ \text{da} & +\end{bmatrix},\ \begin{bmatrix}\text{role} & \textit{patient}\\ \text{gf} & \textit{obj}\\ \text{da} & -\end{bmatrix}\right\}
\end{bmatrix}$$

b. *schlag*- + Subject and Passive Construction:

$$\begin{bmatrix}
\text{syn} & \big[\text{cat}\ v,\ \text{form}\ \textit{PastPart}\big]\\
\text{val} & \left\{\begin{bmatrix}\text{role} & \textit{agent}\\ \text{gf} & \textit{obl}\\ \text{da} & +\\ \text{syn} & P[\textit{von}]/\textit{zero}\end{bmatrix},\ \begin{bmatrix}\text{role} & \textit{patient}\\ \text{gf} & \textit{subj}\end{bmatrix}\right\}
\end{bmatrix}$$

Using the entries in (11), it is possible to analyze the sentences in (12):

(12) a. Er schlägt den Weltmeister.
        he beats the world.champion
        'He is beating the world champion.'

<sup>5</sup> This assumes a particular understanding of set unification. For criticism of this, see Section 10.6.1.


b. Der Weltmeister wird (von ihm) geschlagen.
   the world.champion is by him beaten
   'The world champion is being beaten (by him).'

This analysis is formally inconsistent as set unification cannot be formalized in such a way that the aforementioned constructions can be unified (Müller 2006; Müller 2007a: Section 7.5.2, see also Section 10.6.1 below). It is, however, possible to fix this analysis by using the HPSG formalization of sets (Pollard & Sag 1987, Pollard & Moshier 1990). The Subject, Transitive and Passive Constructions must then be modified such that they say something about what an element in val looks like, rather than specifying the val value as a singleton set.

(13) The *Subject Construction* with Pollard & Moshier's definition of sets:

$$\begin{bmatrix}\text{syn|cat} & v\\ \text{val} & \boxed{1}\end{bmatrix} \wedge \big\{\,[\text{gf}\ \textit{subj}]\,\big\} \subset \boxed{1}$$

The restriction in (13) states that the valence set of a head has to contain an element that has the grammatical function *subj*. By these means, it is possible to suppress arguments (by specifying syn as *zero*), but it is not possible to add any additional arguments to the fixed set of arguments of *schlagen* 'to beat'.<sup>6</sup> For the analysis of Middle Constructions such as (14), inheritance-based approaches do not work as there is no satisfactory way to add the reflexive pronoun to the valence set:<sup>7</sup>

(14) Das Buch liest sich gut.
     the book reads refl good
     'The book reads well / is easy to read.'

If we want to introduce additional arguments, we require auxiliary features. An analysis using auxiliary features has been suggested by Koenig (1999). Since there are many argument structure changing processes that interact in various ways and are linked to particular semantic side-effects, it is inevitable that one ends up assuming a large number of syntactic and semantic auxiliary features. The interaction between the various linking constructions becomes so complex that this analysis also becomes cognitively implausible and has to be viewed as technically unusable. For a more detailed discussion of this point, see Müller (2007a: Section 7.5.2).

This solution cannot be applied to the recursive processes we will encounter in a moment, such as causativization in Turkish, unless one wishes to assume infinite valence sets.

<sup>6</sup> Rather than requiring that *schlagen* 'to beat' has exactly two arguments as in HPSG, one could also assume that the constraint on the main lexical item would be of the kind in (11a). One would then require that *schlagen* has at least the two members in its valence set. This would complicate everything considerably and furthermore it would not be clear that the subject referred to in (13) would be one of the arguments that are referred to in the description of the lexical item for *schlagen* in (11a).

<sup>7</sup>One technically possible solution would be the following: one could assume that verbs that occur in middle constructions always have a description of a reflexive pronoun in their valence set. The Transitive Construction would then have to specify the syn value of the reflexive pronoun as *zero* so that the additional reflexive pronoun is not realized in the Transitive Construction. The Middle Construction would suppress the subject but realize the object and the reflexive.

The following empirical problem is much more serious: some processes like passivization, impersonalization and causativization can be applied in combination or even allow for multiple application, but if the grammatical function of a particular argument is determined once and for all by unification, additional unifications cannot change the initial assignment. We will first look at languages which allow for a combination of passivization and impersonalization, such as Lithuanian (Timberlake 1982: Section 5), Irish (Noonan 1994), and Turkish (Özkaragöz 1986; Knecht 1985: Section 2.3.3). I will use Özkaragöz's Turkish examples in (15) for illustration (1986: 77):

(15) b. Bu oda-da döv-ül-ün-ür.
        this room-loc hit-pass-pass-aor
        'One is beaten (by one) in this room.'
     c. Harp-te vur-ul-un-ur.
        war-loc shoot-pass-pass-aor
        'One is shot (by one) in war.'


Approaches that assume that the personal passive is the unification of some general structure with some passive-specific structure will not be able to capture double passivization or passivization + impersonalization since they have committed themselves to a certain structure too early. The problem for nontransformational approaches that state syntactic structure for the passive is that such a structure, once stated, cannot be modified. That is, we said that the underlying object is the subject in the passive sentence. But in order to get the double passivization/passivization + impersonalization, we have to suppress this argument as well. What is needed is some sort of process (or description) that takes a representation and relates it to another representation with a suppressed subject. This representation is related to a third representation which again suppresses the subject, resulting in an impersonal sentence. In order to do this, one needs different strata as in Relational Grammar (Timberlake 1982, Özkaragöz 1986), metarules (Gazdar, Klein, Pullum & Sag 1985), lexical rules (Dowty 1978: 412; 2003: Section 3.4; Bresnan 1982b; Pollard & Sag 1987; Blevins 2003; Müller 2003b), transformations (Chomsky 1957), or just a morpheme-based morphological analysis that results in items with different valence properties when the passivization morpheme is combined with a head (Chomsky 1981a).

The second set of problematic data that will be discussed comes from causativization in Turkish (Lewis 1967: 146):

<sup>8</sup>According to Özkaragöz, the data is best captured by an analysis that assumes that the passive applies to a passivized transitive verb and hence results in an impersonal passive. The cited authors discussed their data as instances of double passivization, but it was argued by Blevins (2003) that these and similar examples from other languages are impersonal constructions that can be combined with personal passives.


(16) öl-dür-t-tür-t-

'to cause someone to cause someone to cause someone to kill someone' (kill = cause someone to die)

The causative morpheme -*t* is combined four times with the verb (*tür* is an allomorph of the causative morpheme). This argument structure-changing process cannot be modeled in an inheritance hierarchy: if we were to say that a word can inherit from the causative construction three times, we would still not have anything different from what we would have if the inheritance via the causative construction had applied only once. For this kind of phenomenon, we would require rules that relate a linguistic object to another, more complex object, that is, lexical rules (unary branching rules which change the phonology of a linguistic sign) or binary rules that combine a particular sign with a derivational morpheme. These rules can semantically embed the original sign (that is, add *cause* to *kill*).
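The contrast between the two architectures can be sketched schematically. The Python illustration below is my own simplification (allomorphy is ignored, and a `cause` tuple stands in for the semantics of the causative morpheme): a lexical rule can apply to its own output, while unification-style inheritance is idempotent.

```python
# A lexical rule maps a sign to a new, more complex sign; each
# application adds an affix and embeds the old semantics under 'cause'.
def causativize(sign):
    return {"phon": sign["phon"] + ["t"],
            "sem": ("cause", sign["sem"]),
            "args": sign["args"] + 1}

# Inheritance modeled as (crude) unification: merging in the same
# construction twice adds nothing beyond the first merge.
def inherit(sign, construction):
    return {**sign, **construction}

die = {"phon": ["öl"], "sem": "die", "args": 1}
kill = causativize(die)  # 'cause to die'
print(causativize(causativize(kill))["sem"])
# ('cause', ('cause', ('cause', 'die'))): three layers of embedding

once = inherit(die, {"causative": True})
twice = inherit(once, {"causative": True})
print(once == twice)  # True: a second inheritance step changes nothing
```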

The problem of repeated combination with causativization affixes is an instance of a more general problem: derivational morphology cannot be handled by inheritance as was already pointed out by Krieger & Nerbonne (1993) with respect to cases like *preprepreversion*.

If we assume that argument alternations such as passive, causativization and the Middle Construction should be described with the same means across languages, then the evidence from Lithuanian and Turkish forms an argument against inheritance-based analyses of the passive (Müller 2006, 2007b, Müller & Wechsler 2014a). See also Section 21.2.2 for the discussion of an inheritance-based approach to passive in LFG and Section 21.4.2 for the discussion of an inheritance-based approach in Simpler Syntax.

# **10.3 Verb position**

At present, I only know of two publications in the framework of CxG dealing with German clause structure: Micelli (2012) and Welke (2019). Micelli (2012) describes a computer implementation of a German grammar in Fluid Construction Grammar. This fragment is restricted to declarative V2-clauses and *wh*-questions. In her analysis, the middle field forms a constituent comprising exactly two constituents (the direct and indirect object).<sup>9</sup> The right sentence bracket and the postfield are empty. Long-distance dependencies are not discussed. It is only possible for arguments of the verb in the left sentence bracket to occur in the prefield. Micelli's work is an interesting starting point, but one will have to wait and see how the analysis will be modified when the grammar fragment is expanded.

Welke (2019: 283) assumes argument structure constructions like the one in (17):

$$\text{(17)} \quad \boxed{\ \text{Nom}_{1/Ag}\ -\ \text{Akk}_{2/Pat}\ }$$

Welke states that arguments of a verb come in a fixed order in what he calls the primary perspectivization. So in (17), nominative comes before accusative. In addition to

<sup>9</sup>Note that none of the constituent tests that were discussed in Section 1.3 justifies such an analysis and that no other theory in this book assumes the *Mittelfeld* to be a constituent.

the primary perspectivization, there may be a secondary perspectivization that allows for alternative orders like accusative before nominative. He deals with the verb position in Section 7.3: his proposal assumes that certain positions are reserved for verbs in argument structure constructions like (17). He gives the following extended schemata that include argument structure information and information about the verb position:

$$\begin{array}{cc} \text{(18)} & \text{a. } \boxed{-\text{ Arg } \text{Arg } \text{Arg }}\\ & \text{b. } \boxed{\text{Arg } - \text{Arg } \text{Arg}}\\ & \text{c. } \boxed{\text{Arg } \text{Arg } \text{Arg } -}\\ \end{array}$$

(18a) is the schema for verb-initial clauses, (18b) the schema for verb-second clauses and (18c) the schema for verb-final clauses. These schemata stand for argument structure constructions with exactly three arguments, but of course there are others with fewer or more arguments. Welke (2019: Section 7.4) notes that modifiers can be placed anywhere between the arguments in German. He assumes that there is a fusion process that can insert a modifier into argument structure constructions. Welke notes that inserting modifiers into constructions like (18) causes problems with the verb-second property of German since a pattern Modifier Arg Verb Arg Arg is V3 rather than V2. He hence concludes that a generalized construction consisting of verb(s), arguments and modifiers is needed: a construction that corresponds to the topological fields model of the German clause (Welke 2019: Section 7.5). Details about the integration of the semantics of the adjuncts, about the interaction of the topological fields schemata with argument structure constructions and about the analysis of nonlocal dependencies are not provided.

So, it must be said that neither Micelli's analysis nor Welke's provides fully worked out proposals for the phenomena discussed in this book and hence I will not discuss them any further, but instead explore some of the possibilities for analyzing German sentence structure that are at least possible in principle in a CxG framework. Since there are neither empty elements nor transformations, the GB and HPSG analyses from Chapters 3 and 9 as well as their variants in Categorial Grammar are ruled out. The following options remain:


Different variants of CxG make different assumptions about how abstract constructions can be. In Categorial Grammar, we have very general combinatorial rules which combine possibly complex signs without adding any meaning of their own (see rule (2) on page 248 for example). (19) shows an example in which the abstract rule of forward application was used:

(19) [[[[Gibt] der Mann] der Frau] das Buch]
     gives the man the woman the book
     'Does the man give the woman the book?'

If we do not want these kinds of abstract combinatorial rules, then this analysis must be excluded.

The LFG analysis in Section 7.3 is probably also unacceptable on a CxG view, as it is assumed in this analysis that *der Mann der Frau das Buch* forms a VP, although only three NPs have been combined. CxG has nothing like the theory of extended head domains that was presented in Section 7.3.

Thus, both variants with binary-branching structures are ruled out and only the analysis with flat branching structures remains. Sign-based CxG, which is a variant of HPSG (Sag 2010: 486), as well as Embodied Construction Grammar (Bergen & Chang 2005: 156) allow for a separation of immediate dominance and linear order so that it would be possible to formulate a construction which would correspond to the dominance rule in (20) for transitive verbs:<sup>10</sup>

(20) S → V, NP, NP

Here, we have the problem that adjuncts in German can occur between any of the arguments. In GPSG, adjuncts are introduced by metarules. In formal variants of CxG, lexical rules, but not metarules, are used.<sup>11</sup> If one does not wish to expand the formalism to include metarules, then there are three options remaining:


Kasper (1994) has proposed an analysis of the first type in HPSG: adjuncts and arguments are combined with the head in a flat structure. This corresponds to the dominance rule in (21), where the position of adjuncts is not stated by the dominance rule.

(21) S → V, NP, NP, Adj\*

If we want to say something about the meaning of the entire construction, then the original construction (transitive, in the above example) has to be combined with the semantics contributed by each of the adjuncts. These computations are not trivial and require relational constraints (small computer programs), which should be avoided if there are conceptually simpler solutions for describing a particular phenomenon.

<sup>10</sup>In principle, this is also Micelli's analysis, but she assumed that the middle field forms a separate constituent.

<sup>11</sup>Goldberg (2014: 116) mentions metarule-like devices and refers to Cappelle (2006). The difference between metarules and their CxG variant as envisioned by Cappelle and Goldberg is that in CxG, two constructions are related without one construction being basic and the other one derived. Rather, there exists a mutual relation between two constructions.

The alternative would be to use discontinuous constructions. Analyses with discontinuous constituents have been proposed in both HPSG (Reape 1994) and Embodied Construction Grammar (Bergen & Chang 2005). If we apply Bergen and Chang's analysis to German, the italicized words in (22) would be part of a ditransitive construction.

(22) *Gibt der Mann* morgen *der Frau* unter der Brücke *das Geld*?
     gives the man tomorrow the woman under the bridge the money
     'Is the man going to give the woman the money under the bridge tomorrow?'

The construction has been realized discontinuously and the adjuncts are inserted into the gaps. In this kind of approach, one still has to explain how the scope of quantifiers and adjuncts is determined. While this may be possible, the solution is not obvious and has not been worked out in any of the CxG approaches to date. For further discussions of approaches that allow for discontinuous constituents see Section 11.7.2.2.

# **10.4 Local reordering**

If we assume flat branching structures, then it is possible to use the GPSG analysis for the order of arguments. However, Kay (2002) assumes a phrasal construction for so-called Heavy-NP-Shift in English, which means that there is a new rule for the reordering of heavy NPs in English rather than one rule and two different ways to linearize the daughters.

In CxG, it is often argued that the usage contexts of certain orders differ and we therefore must be dealing with different constructions. Accordingly, one would have to assume six constructions to capture the ordering variants of sentences with ditransitive verbs in final position (see also page 187). An alternative would be to assume that the ordering variants all have a similar structure and that the information-structural properties are dependent on the position of constituents in the respective structure (see De Kuthy 2000 for German and Bildhauer 2008 for Spanish).

# **10.5 Long-distance dependencies**

Kay & Fillmore (1999: Section 3.10) discuss long-distance dependencies in their article. Since the number of arguments is not specified in the verb phrase construction, it is possible that an argument of the verb is not locally present. Like the LFG and GPSG analyses in previous chapters, there are no empty elements assumed for the analysis of long-distance dependencies. In the *Left Isolation Construction* that licenses the entire sentence, there is a left daughter and a right daughter. The left daughter corresponds to whatever was extracted from the right daughter. The connection between the fronted element and the position where it is missing is achieved by the operator VAL. VAL provides all elements of the valence set of a linguistic object as well as all elements in the valence set of these elements and so on. It is thereby possible to have unrestricted access to an argument or adjunct daughter of any depth of embedding, and then identify the fronted

constituent with an open valence slot.<sup>12</sup> This approach corresponds to the LFG analysis of Kaplan & Zaenen (1989) based on functional uncertainty.
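The recursion built into VAL can be pictured with a few lines of code. The following Python sketch is my own illustration (valence sets are simplified to lists of nested feature structures):

```python
# VAL yields the elements of a sign's valence set, the elements of
# their valence sets, and so on -- arguments at arbitrary depth.
def val(sign):
    for arg in sign.get("val", []):
        yield arg
        yield from val(arg)  # recurse into the argument's own valence

# A verb selecting a verbal complement that still lacks an object:
sign = {"val": [{"cat": "V", "val": [{"cat": "NP", "gf": "obj"}]}]}
print([x["cat"] for x in val(sign)])  # ['V', 'NP']: the embedded NP is visible
```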

# **10.6 New developments and theoretical variants**

This section and the following one are for advanced readers. They can be skipped without any problems in understanding the following chapters on other frameworks.

Berkeley Construction Grammar was already discussed in the main part of this chapter. The discussion of the formal underpinnings was deferred until the theoretical variants section, since it is more advanced. I made some comments on set unification in Müller (2006: 858), but the longer discussion is only available in Müller (2007a: Section 7.5.2), which is in German. Therefore, I include Section 10.6.1 here, which discusses the formal underpinnings of Berkeley Construction Grammar in more detail and shows that they are not suited for what they were intended to do.

Section 10.6.2 discusses Sign-Based Construction Grammar, which was developed in joint work by Charles Fillmore, Paul Kay and Ivan Sag. It embodies ideas from BCG without having its formal flaws. Section 10.6.3 deals with Embodied Construction Grammar, which is based on work by Charles Fillmore, Paul Kay and George Lakoff. Section 10.6.4 deals with Fluid Construction Grammar.

## **10.6.1 Berkeley Construction Grammar**

Section 10.2 discussed the valence representation in BCG and linking constructions for active and passive. Kay & Fillmore (1999) represent valence information in sets and I deferred the discussion of the formal properties of sets in BCG to this section. Fillmore and Kay's assumptions regarding set unification differ fundamentally from those that are made in HPSG. Kay and Fillmore assume that the unification of the set { a } with the set { b }, where a and b do not unify, results in the union of the two sets, that is { a, b }. Due to this special understanding of sets it is possible to increase the number of elements in a set by means of unification. The unification of two sets that contain compatible elements is the disjunction of sets that contain the respective unifications of the compatible elements. This sounds complicated, but we are only interested in a specific case: the unification of an arbitrary set with a singleton set:

(23) { NP[*nom*], NP[*acc*] } ∧ { NP[*nom*] } = { NP[*nom*], NP[*acc*] }

According to Fillmore & Kay the unification of a set with another set that contains a compatible element does not result in an increase of the number of list elements. (24) illustrates another possible case:

(24) { NP, NP[*acc*] } ∧ { NP[*nom*] } = { NP[*nom*], NP[*acc*] }

<sup>12</sup>Note again, that there are problems with the formalization of this proposal in Kay & Fillmore's paper. The formalization of VAL, which was provided by Andreas Kathol, seems to presuppose a formalization of sets as the one that is used in HPSG, but the rest of Fillmore & Kay's paper assumes a different formalization, which is inconsistent. See Section 10.6.1.

The first NP in (24) is underspecified with respect to its case. The case of the NP in the second set is specified as nominative. NP[*nom*] does not unify with NP[*acc*] but with NP.
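The behavior in (23) and (24) can be restated procedurally. The sketch below is my own reconstruction of the described behavior (feature structures are simplified to flat dictionaries, and the greedy search returns just one of the possible results rather than the full disjunction):

```python
# Unify two flat feature structures; None signals incompatibility.
def unify(a, b):
    merged = dict(a)
    for feature, value in b.items():
        if feature in merged and merged[feature] != value:
            return None
        merged[feature] = value
    return merged

# Kay & Fillmore-style set unification: each element of the second
# set either unifies with a compatible element of the first set or
# is simply added -- so unification can grow the set.
def kf_set_unify(s1, s2):
    result = list(s1)
    for elem in s2:
        for i, old in enumerate(result):
            unified = unify(old, elem)
            if unified is not None:
                result[i] = unified
                break
        else:
            result.append(elem)  # no compatible element: extend the set
    return result

# (24): the underspecified NP is made nominative; the set does not grow.
print(kf_set_unify([{"cat": "NP"}, {"cat": "NP", "case": "acc"}],
                   [{"cat": "NP", "case": "nom"}]))
# (26a): incompatible singleton sets yield a two-element set.
print(kf_set_unify([{"case": "nom"}], [{"case": "acc"}]))
```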

This particular conception of unification has consequences. Unification is usually defined as follows:

(25) The unification of two structures FS<sub>1</sub> and FS<sub>2</sub> is the most general structure FS<sub>3</sub> that is subsumed by both FS<sub>1</sub> and FS<sub>2</sub>, that is, there is no other structure that is subsumed by both FS<sub>1</sub> and FS<sub>2</sub> and that properly subsumes FS<sub>3</sub>.

A structure FS<sub>1</sub> is said to subsume FS<sub>3</sub> iff FS<sub>3</sub> contains all feature-value pairs and structure sharings of FS<sub>1</sub>; FS<sub>3</sub> may contain additional feature-value pairs or structure sharings. The consequence is that the subsumption relations in (26b,c) have to hold if the unification of valence sets works as in (26a):

(26) Properties of the set unification according to Kay & Fillmore (1999):

	a. { NP[*nom*] } ∧ { NP[*acc*] } = { NP[*nom*], NP[*acc*] }
	b. { NP[*nom*] } ⪰ { NP[*nom*], NP[*acc*] }
	c. { NP[*acc*] } ⪰ { NP[*nom*], NP[*acc*] }

(26b) means that a feature structure with a valence set that contains just one NP[*nom*] is more general than a feature structure that contains both an NP[*nom*] and an NP[*acc*]. Therefore, the set of transitive verbs is a subset of the set of intransitive verbs. This is rather unintuitive, but compatible with Fillmore & Kay's system for the licensing of arguments. However, there are problems with the interaction of valence specifications and linking constructions, which we turn to now.

We have seen the result of combining lexical items with linking constructions in (11a) and (11b), but the question of how these results are derived has not been addressed so far. Kay (2002) suggests an automatic computation of all compatible combinations of maximally specific constructions. Such a procedure could be used to compute the lexical representations we saw in Section 10.2 and these could then be used to analyze the well-formed sentences in (12).

However, problems would result for ungrammatical sentences like (27b). *grauen* 'to dread' is a subjectless verb. If one simply combined all compatible linking constructions with *grauen*, the Kay & Fillmorean conception of set unification would cause the introduction of a subject into the valence set of *grauen*, and (27b) would be licensed by the grammar:

(27) b. \* Ich graue dem Student vor der Prüfung.
         I dread the.dat student before the exam

One could solve this problem by specifying an element with the grammatical function *subject* in the lexical entry of *grauen* 'to dread'. In addition, it would have to be stipulated


that this subject can only be realized as an overt or covert expletive (the covert expletive would have the syn value *zero*). For the covert expletive, this means it has neither a form nor a meaning. Such expletive pronouns without phonological realization are usually frowned upon in Construction Grammar and analyses that can do without such abstract entities are to be preferred.

Kay & Fillmore (1999) represent the semantic contribution of signs as sets as well. This excludes the possibility of preventing the unwanted unification of linking constructions by referring to semantic constraints since we have the same effect as we have with valence sets: if the semantic descriptions are incompatible, the set is extended. This means that in an automatic unification computation all verbs are compatible with the Transitive Construction in (9a) and this would license analyses for (28) in addition to those of (27b).

(28) b. \* Der Mann denkt an die Frau das Buch.
         the man thinks at the woman the book

An intransitive verb was unified with the Transitive Construction in the analysis of (28a) and in (28b) a verb that takes a prepositional object was combined with the Transitive Construction. This means that representations like (11) cannot be computed automatically as was intended by Kay (2002). Therefore one would have to specify subconstructions for all argument structure possibilities for every verb (active, passive, middle, …). This fails to capture the fact that speakers can form passives of newly acquired verbs without having to learn separately that each new verb passivizes.

Michaelis & Ruppenhofer (2001) do not use sets for the representation of semantic information. Therefore they could use constraints regarding the meaning of verbs in the Transitive Construction. To this end, one needs to represent semantic relations with feature descriptions as was done in Section 9.1.6. Adopting such a representation, it is possible to talk about two-place relations in an abstract way. See for instance the discussion of (29) on page 286. However, the unification with the Subject Construction cannot be blocked with reference to semantics since there exist so-called raising verbs that take a subject without assigning a semantic role to it. As is evidenced by subject-verb agreement, *du* 'you' is the subject in (29), but the subject does not get a semantic role. The referent of *du* is not the one who *seems*.

(29) Du scheinst gleich einzuschlafen.
     you seem.2sg soon in.to.sleep
     'You seem like you will fall asleep soon.'

This means that one is forced to either assume an empty expletive subject for verbs like *grauen* or to specify explicitly which verbs may inherit from the subject construction and which may not.

In addition to (29), there exist object raising constructions with accusative objects that can be promoted to subject in passives. The subject in the passive construction does not get a semantic role from the finite verb:

(30) b. Richard fischt den Teich leer.
        Richard fishes the pond empty

The objects in (30) are semantic arguments of *an* 'towards' and *leer* 'empty', respectively, but not semantic arguments of the verbs *lacht* 'laughs' and *fischt* 'fishes', respectively. If one wants to explain these active forms and the corresponding passive forms via the linking constructions in (9), one cannot refer to semantic properties of the verb. Therefore, one is forced to postulate specific lexical entries for all possible verb forms in active and passive sentences.

## **10.6.2 Sign-Based Construction Grammar**

In more recent work by Fillmore, Kay, Michaelis and Sag, the Kay & Fillmore formalization of the description of valence using the Kay & Fillmore version of sets was abandoned in favor of the HPSG formalization (Kay 2005, Michaelis 2006, Sag 2012; Sag, Boas & Kay 2012: 10–11). Sign-Based Construction Grammar was developed from the Berkeley variant of CxG. Sign-Based Construction Grammar is a variant of HPSG (Sag 2010: 486) and as such uses the formal apparatus of HPSG (typed feature structures). Valence and saturation are treated in exactly the same way as in standard HPSG. Changes in valence are also analyzed as in HPSG using lexical rules (Sag, Boas & Kay 2012: Section 2.3). The analysis of long-distance dependencies was adopted from HPSG (or rather GPSG). Minimal Recursion Semantics (MRS; Copestake, Flickinger, Pollard & Sag 2005) is used for the description of semantic content. The only difference to works in standard HPSG is the organization of the features in feature structures. A new feature geometry was introduced to rule out constructions that describe daughters of daughters and therefore have a much larger locality domain in contrast to rules in phrase structure grammars, LFG, and GPSG. I do not view this new feature geometry as particularly sensible as it can be easily circumvented and serves to complicate the theory. This will be discussed in Section 10.6.2.1. Another change was the omission of valence features, which is discussed in Section 10.6.2.4.

### **10.6.2.1 Locality and mother**

Sag, Wasow & Bender (2003: 475–489) and Sag (2007, 2012) suggest using a mother feature in addition to daughter features. The Head-Complement Construction would then have the form in (31):


(31) Head-Complement Construction following Sag, Wasow & Bender (2003: 481): *head-comp-cx* ⇒

$$\begin{bmatrix}
\text{mother|syn|val|comps} & \langle\,\rangle\\
\text{head-dtr} & \boxed{0}\ \begin{bmatrix}\textit{word}\\ \text{syn|val|comps}\ \boxed{1}\end{bmatrix}\\
\text{dtrs} & \langle\,\boxed{0}\,\rangle \oplus \boxed{1}\ \textit{nelist}
\end{bmatrix}$$

The value of comps is then a list of the complements of a head (see Section 9.1.1). Unlike in standard HPSG, it is not *synsem* objects that are selected with valence lists, but rather signs. The analysis of the phrase *ate a pizza* takes the form in (32).<sup>13</sup>

The difference to HPSG in the version of Pollard & Sag (1994) is that for Sag, Wasow & Bender, signs do not have daughters, which makes the selection of daughters impossible. As a result, the synsem feature becomes superfluous (selection of the phon value and of the value of the newly introduced form feature is allowed in Sag, Wasow & Bender (2003) and Sag (2012)). The information about the linguistic objects that contribute to a complex sign is represented only at the outermost level of the structure. The sign represented under mother is of the type *phrase* but does not contain any information about the daughters. The object described in (32) is of course also of a different type from the phrasal or lexical signs that can occur as its daughters. We therefore need the following extension so that the grammar will work (Sag, Wasow & Bender 2003: 478):<sup>14</sup>

<sup>13</sup>SBCG uses a form feature in addition to the phon feature, which is used for phonological information as in earlier versions of HPSG (Sag 2012: Section 3.1, Section 3.6). The form feature is usually provided in example analyses.

<sup>14</sup>A less formal version of this constraint is given as the Sign Principle by Sag (2012: 105): "Every sign must be listemically or constructionally licensed, where: a sign is listemically licensed only if it satisfies some listeme, and a sign is constructionally licensed if it is the mother of some well-formed construct."

(33) Φ is a well-formed structure according to a grammar G if and only if:

	- 1. there is a construction C in G, and
	- 2. there is a feature structure I that is an instantiation of C, such that Φ is the value of the mother feature of I.

For comparison, a description is given in (34) with the feature geometry that was assumed in Section 9.1.1.

$$\text{(34)}\quad\begin{bmatrix}
\textit{head-complement-phrase}\\
\text{phon} & \langle\ \textit{ate, a pizza}\ \rangle\\
\text{synsem|loc|cat} & \begin{bmatrix}\text{head} & \textit{verb}\\ \text{spr} & \langle\ \text{NP}[\textit{nom}]\ \rangle\\ \text{comps} & \langle\,\rangle\end{bmatrix}\\
\text{head-dtr} & \begin{bmatrix}\text{phon} & \langle\ \textit{ate}\ \rangle\\ \text{synsem|loc} & \begin{bmatrix}\text{cat} & \begin{bmatrix}\text{head} & \textit{verb}\\ \text{spr} & \langle\ \text{NP}[\textit{nom}]\ \rangle\\ \text{comps} & \langle\ \boxed{1}\ \text{NP}[\textit{acc}]\ \rangle\end{bmatrix}\\ \text{cont} & \dots\end{bmatrix}\end{bmatrix}\\
\text{non-head-dtrs} & \langle\ \big[\text{phon}\ \langle\ \textit{a pizza}\ \rangle,\ \text{synsem}\ \boxed{1}\,\big]\ \rangle
\end{bmatrix}$$

 In (34), the features head-dtr and non-head-dtrs belong to those features that phrases of type *head-complement-phrase* have. In (32), however, the phrase corresponds only to the value of the mother feature and therefore has no daughters represented in the sign itself. Using the feature geometry in (34), it is in principle possible to formulate restrictions on the daughters of the object in the non-head-dtrs list, which would be completely ruled out under the assumption of the feature geometry in (32) and the restriction in (33).

There are several arguments against this feature geometry, which will be discussed in the following subsections. The first one is an empirical one: there may be idioms that span clauses. The second argument concerns the status of the meta statement in (33) and the third one computational complexity.

### 10.6.2.1.1 Idioms that cross constituent boundaries

In Müller (2007a: Chapter 12) I conjectured that the locality restrictions may be too strong since there may be idioms that require one to make reference to daughters of daughters for their description. Richter & Sailer (2009) discuss the following idioms as examples:

(35) a. nicht wissen, wo X_Dat der Kopf steht
        not know where X.dat the head stands
        'to not know where X's head is at'


In sentences containing the idioms in (35a–c), the X-constituent has to be a pronoun that refers to the subject of the matrix clause. If this is not the case, the sentences become ungrammatical or lose their idiomatic meaning.

(36) a. Ich glaube, mich / # dich tritt ein Pferd.
        I believe me.acc / you.acc kicks a horse
     b. Jonas glaubt, ihn tritt ein Pferd.<sup>15</sup>
        Jonas believes him kicks a horse
        'Jonas is utterly surprised.'
     c. # Jonas glaubt, dich tritt ein Pferd.
        Jonas believes you kicks a horse
        'Jonas believes that a horse kicks you.'

In order to enforce this co-reference, a restriction has to be able to refer to both the subject of *glauben* 'believe' and the object of *treten* 'kick' at the same time. In SBCG, there is the possibility of referring to the subject since the relevant information is also available on maximal projections (the value of a special feature (xarg) is identical to the subject of a head). In (35a–c), we are dealing with accusative and dative objects. Instead of only making information about one argument accessible, one could represent the complete argument structure on the maximal projection (as is done in some versions of HPSG, see page 310 and pages 566–568). This would remove locality of selection, however, since if all heads project their argument structure, then it is possible to determine the properties of arguments of arguments by looking at the elements present in the argument structure. Thus, the argument structure of *wissen* 'to know' in (37) would contain the description of a *dass* clause.

(37) Peter weiß, dass Klaus kommt.
     Peter knows that Klaus comes
     'Peter knows that Klaus is coming.'

Since the description of the *dass* clause contains the argument structure of *dass*, it is possible to access the argument of *dass*. *wissen* 'to know' can therefore access *Klaus kommt*. As such, *wissen* also has access to the argument structure of *kommt* 'to come',

<sup>15</sup>http://www.machandel-verlag.de/der-katzenschatz.html, 2015-07-06.

which is why *Klaus* is also accessible to *wissen*. However, the purpose of the new, more restrictive feature geometry was to rule out such nonlocal access to arguments.

An alternative to projecting the complete argument structure was suggested by Kay et al. (2015: Section 6): instead of assuming that the subject is the xarg in idiomatic constructions like those in (35), they assume that the accusative or dative argument is the xarg. This is an interesting proposal that could be used to fix the cases under discussion, but the question is whether it scales up once interactions with other phenomena are considered. For instance, Bender & Flickinger (1999) use xarg in their account of question tags in English. So, if English idioms can be found that require a non-subject xarg in embedded sentences while also admitting the idiom parts in the embedded sentence to occur as a full clause with a question tag, we would have conflicting demands and would have to assume different xargs for root and embedded clauses, which would make this version of the lexical theory rather unattractive, since we would need two lexical items for the respective verb.

(35d) is especially interesting, since here the X that refers to material outside the idiom is in an adjunct. If such cases existed, the xarg mechanism would be clearly insufficient since xarg is not projected from adjuncts. However, as Kay et al. (2015) point out, the X does not necessarily have to be a pronoun that is coreferent with an element in a matrix clause. They provide the following example:

(38) Justin Bieber—Once upon a time ∅ butter wouldn't melt in little Justin's mouth. Now internationally famous for being a weapons-grade petulant brat …

So, whether examples of the respective kind can be found is an open question.

Returning to our *horse* examples, Richter & Sailer (2009: 313) argue that the idiomatic reading is only available if the accusative pronoun is fronted and the embedded clause is V2. The examples in (39) do not have the idiomatic reading:

(39) a. Ich glaube, dass mich ein Pferd tritt.
        I believe that me a horse kicks
        'I believe that a horse kicks me.'
     b. Ich glaube, ein Pferd tritt mich.
        I believe a horse kicks me
        'I believe that a horse kicks me.'

Richter & Sailer assume a structure for *X\_Acc tritt ein Pferd* in (35b) that contains, among others, the constraints in (40).

The feature geometry in (40) differs somewhat from what was presented in Chapter 9, but that is not of interest here. It is only of importance that the semantic contribution of the entire phrase is *surprised*′(x<sub>2</sub>). The following is said about the internal structure of the phrase: it consists of a filler daughter (an extracted element) and also of a head daughter corresponding to a sentence from which something has been extracted. The head daughter means 'a horse kicks x<sub>2</sub>' and has an internal head somewhere whose argument structure list contains an indefinite NP with the word *Pferd* 'horse' as its head. The second element in the argument structure is a pronominal NP in the accusative

whose local value is identical to that of the filler (1). The entire meaning of this part of the sentence is *surprised*′(x<sub>2</sub>), whereby 2 is identical to the referential index of the pronoun. Beyond the constraints in (40), there are additional ones that ensure that the partial clause appears with the relevant form of *glauben* 'to believe' or *denken* 'to think'. The exact details are not that important here. What is important is that one can specify constraints on complex syntactic elements, that is, it must be possible to refer to daughters of daughters. This is possible with the classical HPSG feature geometry, but not with the feature geometry of SBCG. For a more general discussion of locality, see Section 18.2.

The restrictions on *Pferd* clauses in (40) are too strict, however, since there are variants of the idiom that do not have the accusative pronoun in the *Vorfeld*:

(41) a. ich glaub es tritt mich ein Pferd wenn ich einen derartigen Unsinn lese.<sup>16</sup>
        I believe expl kicks me a horse when I a such nonsense read
        'I am utterly surprised when I read such nonsense.'
     b. omg dieser xBluuR der nn ist wieder da ey nein ich glaub es tritt mich ein Pferd!!<sup>17</sup>
        omg this xBluuR he nn is again there ey no I believe expl kicks me a horse
        'OMG, this xBluuR, the nn, he is here again, no, I am utterly surprised.'

<sup>16</sup>http://www.welt.de/wirtschaft/article116297208/Die-verlogene-Kritik-an-den-Steuerparadiesen.html, commentary section, 2018-02-20.

<sup>17</sup>http://forum.gta-life.de/index.php?user/3501-malcolm/, 2015-12-10.

c. ich glaub jetzt tritt mich ein pferd<sup>18</sup>
   I believe now kicks me a horse
   'I am utterly surprised now.'

In (41a,b) the *Vorfeld* is filled by an expletive and in (41c) an adverb fills the *Vorfeld* position. While these forms of the idiom are really rare, they do exist and should be allowed for by the description of the idiom. So, one would have to make sure that *ein Pferd* 'a horse' is not fronted, but this can be done in the lexical item of *tritt* 'kicks'. This shows that the cases at hand cannot be used to argue for models that allow for the representation of (underspecified) trees of a depth greater than one, but I still believe that such idioms can be found. Of course this is an open empirical question.

What is not an open empirical question though is whether humans store chunks with complex internal structure or not. It is clear that we do, and much Construction Grammar literature emphasizes this. Constructional HPSG can represent such chunks, but SBCG cannot since linguistic signs do not have daughters. So here Constructional HPSG and TAG are the theories that can represent complex chunks of linguistic material with their internal structure, while other theories like GB, Minimalism, CG, LFG and DG cannot.

### 10.6.2.1.2 Complicated licensing of constructions

In addition to these empirical problems, there is a conceptual problem with (33): (33) is not part of the formalism of typed feature structures but rather a meta-statement. Therefore, grammars which use (33) cannot be described with the normal formalism. The formalization given in Richter (2004) cannot be directly applied to SBCG, which means that the formal foundations of SBCG still have to be worked out.<sup>19</sup> Furthermore, the original problem that (33) was designed to solve is not solved by introducing the new feature geometry and the meta-statement. Instead, the problem is just moved to another level since we now need a theory about what is a permissible meta-statement and what is not. As such, a grammarian could add a further clause to the meta-statement stating that Φ is only a well-formed structure if it is true of the daughters of a relevant construction C that they are the mother value of a construction C′. It would be possible to formulate constraints in the meta-statement about the construction C′ or individual values inside the corresponding feature structures. In this way, locality would have been abandoned since it is possible to refer to daughters of daughters. By assuming (33), the theoretical inventory has been increased without any explanatory gain.

### 10.6.2.1.3 Computational complexity

One motivation behind restrictions on locality was to reduce the computational complexity of the formalism (Ivan Sag, p.c. 2011; see Chapter 17 on computational complexity and

<sup>18</sup>http://www.castingshow-news.de/menowin-frhlich-soll-er-zum-islam-konvertieren-7228/, 2018-02-20.

<sup>19</sup>A note of caution is necessary since there were misunderstandings in the past regarding the degree of formalizations of SBCG: in comparison to most other theories discussed in this book, SBCG is well-formalized. For instance it is easy to come up with a computer implementation of SBCG fragments. I implemented one in the TRALE system myself. The reader is referred to Richter (2004) to get an idea what kind of deeper formalization is talked about here.


generative power). However, the locality restrictions of SBCG can be circumvented easily by structure sharing (Müller 2013a: Section 9.6.1). To see this consider a construction with the following form:<sup>20</sup>

$$\text{(42)}\quad\begin{bmatrix}
\text{mother} & \big[\,\text{nasty}\ \boxed{1}\,\big]\\
\text{dtrs} & \boxed{1}
\end{bmatrix}$$
The feature nasty in the mother sign refers to the value of dtrs and hence all the internal structure of the sign that is licensed by the constructional schema in (42) is available. Of course, one could rule out such things by stipulation – if one considered it to be empirically adequate – but then one could just as well continue to use the feature geometry of Constructional HPSG (Sag 1997) and stipulate constraints like "Do not look into the daughters." An example of such a constraint given in prose is the Locality Principle of Pollard & Sag (1987: 143–144).
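In the kind of dictionary encoding used for illustration earlier in this chapter, the circumvention amounts to nothing more than aliasing. A minimal sketch (my own illustration; nasty is the hypothetical feature from (42), not part of SBCG proper):

```python
# The mother is supposed to be daughter-free, but a feature whose
# value is shared with DTRS smuggles the daughters back in.
dtrs = [{"phon": ["ate"]}, {"phon": ["a", "pizza"]}]
construct = {
    "mother": {"phon": ["ate", "a", "pizza"], "nasty": dtrs},  # structure sharing
    "dtrs": dtrs,
}

# Anything that sees the mother sign now sees its internal structure:
print(construct["mother"]["nasty"][1]["phon"])  # ['a', 'pizza']
```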

### **10.6.2.2 Selection of phon and form values**

The feature geometry of constructional HPSG has the phon value outside of synsem. Therefore verbs can select for syntactic and semantic properties of their arguments but not for their phonology. For example, they can require that an object has accusative case but not that it starts with a vowel. SBCG allows for the selection of phonological information (the feature is called form here) and one example of such a selection is the indefinite article in English, which has to be either *a* or *an* depending on whether the noun or nominal projection it is combined with starts with a vowel or not (Flickinger, Mail to the HPSG mailing list, 01.03.2016):

(43) a. an apple
     b. a house

The distinction can be modeled by assuming a selection feature for determiners.<sup>21</sup> An alternative would of course be to capture all phonological phenomena by formulating constraints on phonology at the phrasal level (see Bird & Klein 1994 and Walther 1999 for phonology in HPSG).
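A toy version of such a selection feature can be given in a few lines (my own sketch; a serious treatment would be phonological rather than orthographic, cf. *an hour* vs. *a university*):

```python
# The determiner inspects the FORM value of the nominal it combines
# with and selects its own shape accordingly.
def indefinite_article(noun_form):
    return "an" if noun_form[0].lower() in "aeiou" else "a"

print(indefinite_article("apple"), "apple")  # an apple
print(indefinite_article("house"), "house")  # a house
```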

<sup>20</sup>Bob Borsley (p.c. 2019) pointed out to me that a feature like nasty in (42) is actually used in SBCG: the xarg feature makes available one argument (usually the subject) at a complete projection even though the respective element may be contained within it (Sag 2012: 84). xarg can be used to refer to a daughter within a complex structure for example to establish agreement in question tag formation (Bender & Flickinger 1999).

<sup>21</sup>In Standard HPSG there is mutual selection between the determiner and the noun. The noun selects the determiner via spr and the determiner selects the noun via a feature called specified. This feature is similar to the mod feature, which was explained in Section 9.1.7.

Note also that the treatment of raising in SBCG admits nonlocal selection of phonology values, since the analysis of raising in SBCG assumes that the element on the valence list of the embedded verb is identical to an element in the arg-st list of the matrix verb (Sag 2012: 159). Hence, both verbs in (44) can see the phonology of the subject:

(44) Kim can eat apples.

In principle there could be languages in which the form of the downstairs verb depends on the presence of an initial consonant in the phonology of the subject. English allows for long chains of raising verbs and one could imagine languages in which all the verbs on the way are sensitive to the phonology of the subject. Such languages probably do not exist.

Now, is this a problem? Not for me, but if one develops a general setup in such a way as to exclude everything that is not attested in the languages of the world (as for instance the selection of arguments of arguments of arguments), then it is a problem that heads can see the phonology of elements that are far away.

There are two possible conclusions for practitioners of SBCG. Either the mother feature is given up, on the understanding that theories that do not make wrong predictions are sufficiently constrained and that one does not have to state explicitly what cannot occur in languages; or one reacts to the problem of nonlocally selected phonology values and assumes a synsem or local feature that bundles the information that is relevant in raising and does not include the phonology. This supports the argument about mother that I made in the previous subsection.

### **10.6.2.3 The local feature and information shared in nonlocal dependencies**

Similarly, elements of the arg-st list contain information about form. In nonlocal dependencies, this information is shared in the gap list (slash list in other versions of HPSG) and is available all the way up to the filler. In other versions of HPSG, only local information is shared and elements in valence lists do not have a phon feature. If the sign that is contained in the gap list were identified with the filler, the information about phonological properties of the filler would be available at the extraction site and SBCG could be used to model languages in which the phonology of a filler is relevant for a head from which it is extracted. So for instance, *likes* could see the phonology of *bagels* in (45):

(45) Bagels, I think that Peter likes.

It would be possible to state constraints saying that the filler has to contain a vowel or two vowels or that it ends with a consonant. In addition, all elements on the extraction path (*that* and *think*) can see the phonology of the filler as well. While there are languages that mark the extraction path, I doubt that there are languages that have phonological effects across long distances. This problem can be and has been solved by assuming that the filler is not shared with the information in the gap list, but that parts of the filler are shared with parts in the gap list: Sag (2012: 166) assumes that syn, sem and store information are identified individually. Originally, the feature geometry of


HPSG was motivated by the wish to structure-share information. Everything within local was shared between filler and extraction site. This kind of motivation is given up in SBCG.

Note also that not sharing the complete filler with the gap means that the form value of the element in the arg-st list at the extraction site is not constrained. Without any constraint, the theory would be compatible with infinitely many models, since the form value could be anything. For example, the form value of an extracted adjective could be ⟨ *Donald Duck* ⟩ or ⟨ *Dunald Dock* ⟩ or any arbitrary chaotic sequence of letters/phonemes. To exclude this, one can stipulate the form values of extracted elements to be the empty list, but this leaves one with the unintuitive situation that the element in gap has an empty form list while the corresponding filler has a different, filled one.

### **10.6.2.4 The valence list**

Another change from Constructional HPSG to SBCG involves the use of a single valence feature rather than the three features spr, subj and comps that were suggested by Borsley (1987) to solve problems in earlier HPSG versions that used a single valence feature (subcat). Borsley's suggestion was taken up by Pollard & Sag (1994: Chapter 9) and has been used in some form or other in HPSG versions ever since.

Sag (2012: 85) assumes that VPs are described by the following feature description:

$$\text{(46)} \quad \begin{bmatrix} \text{syn} & \big[\,\text{val}\ \langle\ \text{NP}\ \rangle\,\big] \end{bmatrix}$$

The problem with such an approach is that VPs differ from other phrasal projections in having an element on their valence list. APs, NPs, and (some) PPs have an empty valence list. In other versions of HPSG the complements are represented on the comps list and generalizations about phrases with fully saturated comps lists can be expressed directly. One such generalization is that projections with an empty comps list (NPs, PPs, VPs, adverbs, CPs) can be extraposed in German (Müller 1999b: Section 13.1.2).

Note also that reducing the number of valence features does not necessarily reduce the number of constructions in the grammar. While classical HPSG has a Specifier-Head Schema and a Head-Complement Schema, SBCG has two Head-Complement Constructions: one with a mother with a singleton comps list and one with a mother with an empty comps list (Sag 2012: 152). That two separate constructions are needed is due to the assumption of flat structures.

### **10.6.2.5 The head feature and the head feature principle**

While Sag et al. (2003) still assume a head feature, Sag (2012) eliminated it. Rather than having one place in a feature structure where all information is stated that is always shared between a lexical head and its projection, the features that were represented under head are now simply represented under cat. Sag (2012) got rid of the Head Feature Principle (see p. 280) and stated identity of information explicitly within constructions. Structure sharing is no longer indicated with boxed numbers but with capital letters instead. An exclamation mark can be used to specify information that is not shared (Sag 2012: 125). While the use of letters instead of numbers is just a presentational variant, the exclamation mark is a non-trivial extension. (47) provides an example, the constraint on the type *pred-hd-comp-cxt*:


(47) Predicational Head-Complement Construction following Sag (2012: 152):

$$\textit{pred-hd-comp-cxt} \Rightarrow
\begin{bmatrix}
\text{mother|syn} & \text{X}\,!\,\big[\text{val}\ \langle\,\text{Y}\,\rangle\big]\\
\text{head-dtr} & \text{Z}{:}\begin{bmatrix}\textit{word}\\ \text{syn}\ \text{X}{:}\,\big[\text{val}\ \langle\,\text{Y}\,\rangle \oplus \text{L}\big]\end{bmatrix}\\
\text{dtrs} & \langle\,\text{Z}\,\rangle \oplus \text{L}{:}\,\textit{nelist}
\end{bmatrix}$$

The X stands for all syntactic properties of the head daughter. These are identified with the syn value of the mother with the exception of the val value, which is specified to be a list with the element Y. It is interesting to note that the !-notation is not without problems: Sag (2012: 145) states that the version of SBCG that he presents is "purely monotonic (non-default)", but if the syn value of the mother is not identical to that of the head daughter because of the overwriting of val, it is unclear how the type of syn can be constrained. ! can be understood as explicitly sharing all features that are not mentioned after the !. Note, though, that the type has to be shared as well. This is not trivial: plain structure sharing cannot be applied here, since structure sharing the type would also identify all features belonging to the respective value. So one would need a relation that singles out the type of a structure and identifies this type with the value of another structure. Note also that information from features behind the ! can make the type of the complete structure more specific. Does this affect the shared structure (e.g., head-dtr|syn in (47))? What if the type of the complete structure is incompatible with the features in this structure? What seems to be a harmless notational device in fact involves some non-trivial machinery in the background. Keeping the Head Feature Principle makes this additional machinery unnecessary.
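One way to spell out the intended semantics of ! is 'share every feature that is not explicitly overridden'. The naive Python sketch below is my own illustration; it silently carries over the type of the shared structure, which is precisely the non-trivial step that is glossed over:

```python
# Share all features of the head daughter's SYN value except the
# ones explicitly overridden after the exclamation mark.
def share_except(syn, overrides):
    shared = dict(syn)        # copy every feature ...
    shared.update(overrides)  # ... then overwrite the listed ones
    return shared             # note: the TYPE is silently carried over

head_syn = {"type": "verbal", "cat": "verb", "val": ["NP", "NP"]}
mother_syn = share_except(head_syn, {"val": ["NP"]})  # X ! [val <Y>]
print(mother_syn)  # {'type': 'verbal', 'cat': 'verb', 'val': ['NP']}
```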

### **10.6.2.6 Conclusion**

Due to the conceptual problems with meta-statements and the relatively simple ways of getting around locality restrictions, the reorganization of features (mother vs. synsem) does not bring with it any advantages. Since the grammar becomes more complex due to the meta-constraint, we should reject this change.<sup>22</sup>

<sup>22</sup>In Müller (2013a: 253) I claimed that SBCG uses a higher number of features in comparison to other variants of HPSG because of the assumption of the mother feature. As Van Eynde (2015) points out, this is not true for more recent variants of HPSG since they have the synsem feature, which is not needed if mother is assumed. (Van Eynde refers to the local feature, but the local feature was eliminated because it was considered superfluous given the lexical analysis of extraction.) If one simply omits the mother feature from SBCG, one is back to the 1987 version of HPSG (Pollard & Sag 1987), which also used a syn and a sem feature. What would be missing would be the locality of selection (Sag 2012: 149) that was enforced to some extent by the synsem feature. Note that the locality of selection that is enforced by synsem can be circumvented by the use of relational constraints as well (see Frank Richter and Manfred Sailer's work on collocations: Richter & Sailer 1999a, Soehn & Sailer 2008). So in principle, we end up with style guides in this area of grammar as well.

Other changes in the feature geometry (elimination of the local feature and use of a single valence feature) are problematic as well. However, if we do reject the revised feature geometry and revert to the feature geometry that was used before, then Sign-Based Construction Grammar and Constructional HPSG (Sag 1997) are (almost) indistinguishable.

# **10.6.3 Embodied Construction Grammar**

Embodied Construction Grammar was developed by Bergen & Chang (2005) and there are some implementations of fragments of German that use this format (Porzel et al. 2006). In the following, I will briefly present the formalism using an example construction. (48) gives the DetNoun construction:<sup>23</sup>


This representational form is reminiscent of PATR-II grammars (Shieber, Uszkoreit, Pereira, Robinson & Tyson 1983): as in PATR-II, the daughters of a construction are given names. (48) contains the daughters c and d: d is a determiner and c is a common noun. It is possible to refer to the construction itself via the object **self**. Constructions (and also their daughters) are feature-value descriptions. Structure sharing is represented by path equations. For example, d.gender ↔ c.gender states that the value of the gender feature of the determiner is identical to the gender feature of the noun. As well as restrictions on features, there are restrictions on the form. d.f **before** c.f states that the form contribution of the determiner must occur before that of the noun. Bergen & Chang (2005) differentiate between immediate (**meets**) and non-immediate precedence (**before**). Part of the information represented under f is the orthographic form (f.orth). The inheritance relation is given explicitly in the construction, as in Kay & Fillmore (1999).
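The mechanics of path equations and form constraints can be sketched as follows in Python (my own encoding; only the unit names c and d, the gender equation, and the **before**/**meets** distinction are taken from the construction discussed above):

```
# A sketch of ECG-style path equations and precedence constraints;
# the encoding is invented for illustration.
def path(unit, dotted):
    """Resolve a dotted path such as 'f.orth' in a unit."""
    value = unit
    for attribute in dotted.split('.'):
        value = value[attribute]
    return value

def det_noun_ok(d, c, forms, require_adjacency=False):
    # d.gender <-> c.gender: a path equation (structure sharing)
    if path(d, 'gender') != path(c, 'gender'):
        return False
    i, j = forms.index(path(d, 'f.orth')), forms.index(path(c, 'f.orth'))
    # 'meets' = immediate precedence; 'before' = mere precedence
    return j - i == 1 if require_adjacency else i < j

d = {'gender': 'fem', 'f': {'orth': 'die'}}
c = {'gender': 'fem', 'f': {'orth': 'Frauen'}}
print(det_noun_ok(d, c, ['die', 'Frauen']))          # True: 'die' precedes
print(det_noun_ok(d, c, ['die', 'Tueren', 'Frauen'],
                  require_adjacency=True))           # False: not adjacent
```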

The construction in (48) can be represented in a similar way to the format used in Chapter 6: (49) shows how this is done. The structure in (49) corresponds to a construction

<sup>23</sup>For a similar construction, see Bergen & Chang (2005: 162).

where the determiner directly precedes the noun, because the form contribution of the determiner has been combined with that of the noun. This strict adjacency constraint makes sense, since merely requiring that the determiner precede the noun would not be restrictive enough: sequences such as (50b) would be allowed:

(50) a. [dass] die Frauen Türen öffnen
        that the women doors open
        'that the women open doors'
     b. * die Türen öffnen Frauen

If discontinuous phrases are permitted, *die Türen* 'the doors' can be analyzed with the DetNoun Construction although another noun phrase intervenes between the determiner and the noun (Müller 1999b: 424; 1999d). The order in (50b) can be ruled out by linearization constraints or constraints on the continuity of arguments. If we want the construction to require that the determiner and noun be adjacent, then we would simply use **meets** instead of **before** in the specification of the construction.

This discussion has shown that (49) is more restrictive than (48). There are, however, contexts in which one could imagine using discontinuous constituents such as the deviant one in (50b). For example, discontinuous constituents have been proposed for verbal complexes, particle verbs and certain coordination data (Wells 1947). Examples of analyses with discontinuous constituents in the framework of HPSG are Reape (1994), Kathol (1995), Kathol (2000), Crysmann (2008), and Beavers & Sag (2004).<sup>24</sup> These analyses, which are discussed in more detail in Section 11.7.2.2, differ from those previously presented in that they use a domain feature instead of or in addition to the daughters features. The value of the domain feature is a list containing the head and the elements dependent on it. The elements do not necessarily have to be adjacent in the utterance, that is, discontinuous constituents are permitted. Which elements are entered into this

<sup>24</sup>Crysmann (2008) and Beavers & Sag (2004) deal with coordination phenomena. For an analysis of coordination in TAG that also makes use of discontinuous constituents, see Sarkar & Joshi (1996) and Section 21.6.2.

list, and in which way, is governed by the constraints that are part of the linguistic theory. This approach is much more flexible than the simple **before** statement in ECG, and it also makes it possible to restrict the area in which a given element can be ordered, since elements can be ordered freely only inside their domain.
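The flexibility of such domains can be illustrated with a small sketch (my own encoding, in the spirit of the domain-based analyses just cited; the example items are invented):

```
# A sketch of a linearization domain: the domain value is a list of the
# head and its dependents, and LP constraints restrict orders within
# this list only.
from itertools import permutations

def admissible_orders(domain, lp_constraints):
    """All serializations of `domain` that satisfy every LP constraint.
    Elements can only be ordered inside their own domain."""
    return [order for order in permutations(domain)
            if all(order.index(a) < order.index(b)
                   for a, b in lp_constraints)]

# A complementizer clause: the complementizer precedes the subject,
# which in turn precedes the verb.
domain = ['dass', 'Peter', 'kommt']
lp = [('dass', 'Peter'), ('Peter', 'kommt')]
print(admissible_orders(domain, lp))   # [('dass', 'Peter', 'kommt')]
```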

There is a further difference between the representation in (48) and the general HPSG schemata: in the ECG variant, linearization requirements are linked to constructions. In HPSG and GPSG, it is assumed that linearization rules hold generally, that is, if we were to assume the rules in (51), we would not have to state for each rule explicitly that shorter NPs tend to precede longer ones and that animate nouns tend to occur before inanimate ones.

(51) a. S → NP[nom], NP[acc], V
     b. S → NP[nom], NP[dat], V
     c. S → NP[nom], NP[dat], NP[acc], V
     d. S → NP[nom], NP[acc], PP, V

It is possible to capture these generalizations in ECG if one specifies linearization constraints for more general constructions and lets more specific constructions inherit them. As an example, consider the Active-Ditransitive Construction discussed by Bergen & Chang (2005: 170):


These restrictions allow the sentences in (53a,b) and rule out the one in (53c):

(53) a. Mary tossed me a drink.
     b. Mary happily tossed me a drink.
     c. * Mary tossed happily me a drink.

The restriction agent.f **before** action.f forces an order where the subject occurs before the verb but also allows for adverbs to occur between the subject and the verb. The other constraints on form determine the order of the verb and its object: the recipient must be adjacent to the verb and the theme must be adjacent to the recipient. The requirement that an agent in the active must occur before the verb is not specific to ditransitive constructions. This restriction could therefore be factored out as follows:

(54) **Construction** Active-Agent-Verb
       **subcase of** Pred-Expr
       **constructional**
         agent: Ref-Expr
         action: Verb
       **form**
         agent.f **before** action.f

The Active-Ditransitive Construction in (52) would then inherit the relevant information from (54).

In addition to the descriptive means used in (48), there is the evokes operator (Bergen & Chang 2005: 151–152). An interesting example is the representation of the term hypotenuse: this concept can only be explained by making reference to a right-angled triangle (Langacker 1987: Chapter 5). Chang (2008: 67) gives the following formalization:

(55) **Schema** hypotenuse
       **subcase of** line-segment
       **evokes** right-triangle **as** rt
       **constraints**
         **self** ↔ rt.long-side

This states that a hypotenuse is a particular line segment, namely the longest side of a right-angled triangle. The concept of a right-angled triangle is activated by means of the evokes operator. Evokes creates an instance of an object of a certain type (in the example, *rt* of type *right-triangle*). It is then possible to refer to the properties of this object in a schema or in a construction.

The feature description in (56) is given in the notation from Chapter 6; it is equivalent to (55).

$$\text{(56)}\quad \boxed{1}\begin{bmatrix} \textit{hypotenuse} \\ \text{EVOKES}\ \left\langle \begin{bmatrix} \textit{right-triangle} \\ \text{LONG-SIDE}\ \boxed{1} \end{bmatrix} \right\rangle \end{bmatrix}$$

The type *hypotenuse* is a subtype of *line-segment*. The value of evokes is a list, since a schema or construction can evoke more than one concept. The only element in this list in (56) is an object of type *right-triangle*. The value of the feature long-side is identified with the entire structure. This essentially means the following: I, as a hypotenuse, am the long side of a right-angled triangle.

Before turning to FCG in the next subsection, we can conclude that ECG and HPSG are notational variants.

# **10.6.4 Fluid Construction Grammar**

Van Trijp (2013, 2014) claims that SBCG and HPSG are fundamentally different from Fluid Construction Grammar (FCG).<sup>25</sup> He claims that the former approaches are generative ones while the latter is a cognitive-functional one. I think that it is not legitimate to draw these distinctions on the basis of what is done in FCG.<sup>26</sup> I will comment on this at various places in this section. I first deal with the representations that are used in FCG, then discuss argument structure constructions and the combination operations fusion and merging that are used in FCG, and finally provide a detailed comparison of FCG and SBCG/HPSG.

### **10.6.4.1 General remarks on the representational format**

Fluid Construction Grammar (FCG, Steels 2011) is similar to HPSG in that it uses attribute value matrices to represent linguistic objects. However, these AVMs are untyped, as in LFG. Since there are no types, there are no inheritance hierarchies that can be used to capture generalizations, but one can use macros to achieve similar effects. Constructions can refer to more general constructions (van Trijp 2013: 105). Every AVM comes with a name and can be depicted as follows:

(57) *unit-name*
       feature₁ *value*₁
       …
       featureₙ *value*ₙ

Linguistic objects have a form and a meaning pole. The two poles could be organized into a single feature description by using a syn and a sem feature, but in FCG papers the two poles are presented separately and connected via a double arrow. (58) is an example:

(58) The name *Kim* according to van Trijp (2013: 99):

<sup>25</sup>A reply to van Trijp based on the discussion in this section is published as Müller (2017).

<sup>26</sup>Steels (2013: 153) emphasizes the point that FCG is a technical tool for implementing constructionist ideas rather than a theoretical framework of its own. However, authors working with the FCG system publish linguistic papers that share a certain formal background and certain linguistic assumptions. So this section addresses some of the key assumptions made and some of the mechanisms used.

Depending on the mode in which the lexical items are used, either the syntactic pole or the semantic pole is used first. The first processing step is a matching phase in which it is checked whether the semantic pole (for generation) or the syntactic pole (for parsing) matches the structure that has been built so far. After this test for unification, the actual unification, which is called merging, is carried out. After this step, the respective other pole (syntax for generation and semantics for parsing) is merged. This is illustrated in Figure 10.2.

Figure 10.2: Generation and parsing in FCG (van Trijp 2013: 99)
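The two-phase application of a construction can be sketched as follows (my own flat encoding in Python; real FCG poles and transient structures are considerably richer):

```
# A sketch of FCG's match-then-merge cycle; the encoding is invented
# for illustration.
def matches(pole, structure):
    """Matching: every feature-value pair of the pole must be compatible
    with the structure built so far (equal or simply absent)."""
    return all(structure.get(f, v) == v for f, v in pole.items())

def merge(pole, structure):
    """Merging: the actual unification, carried out only after matching."""
    merged = dict(structure)
    merged.update(pole)
    return merged

construction = {'syn': {'cat': 'proper-noun'},
                'sem': {'referent': 'kim'}}
transient = {'syn': {'orth': 'Kim', 'cat': 'proper-noun'}, 'sem': {}}

# Parsing: the syntactic pole is tested first, then both poles are merged.
if matches(construction['syn'], transient['syn']):
    transient['syn'] = merge(construction['syn'], transient['syn'])
    transient['sem'] = merge(construction['sem'], transient['sem'])
print(transient['sem'])    # {'referent': 'kim'}
```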

### **10.6.4.2 Argument Structure Constructions**

Fluid Construction Grammar assumes a phrasal approach to argument structure, that is, it is assumed that lexical items enter into phrasal configurations that contribute independent meaning (van Trijp 2011). The FCG approach is one version of implementing Goldberg's plugging approach to argument structure constructions (Goldberg 1995). Van Trijp suggests that every lexical item comes with a representation of potential argument roles like Agent, Patient, Recipient, and Goal. Phrasal argument structure constructions are combined with the respective lexical items and realize a subset of the argument roles, that is, they assign them to grammatical functions. Figure 10.3 shows an example: the verb *sent* has the semantic roles Agent, Patient, Recipient, and Goal (upper left of the figure). Depending on the argument structure construction that is chosen, a subset of these roles is selected for realization.<sup>27</sup> The figures show the relation between sender,

<sup>27</sup>It is interesting to note here that van Trijp (2011: 141) actually suggests a lexical account since every lexical item is connected to various phrasal constructions via coapplication links. So every such pair of a lexical item and a phrasal construction corresponds to a lexical item in Lexicalized Tree Adjoining Grammar (LTAG). See also Müller & Wechsler (2014a: 25) on Goldberg's assumption that every lexical item is associated with phrasal constructions.

Note that such coapplication links are needed since without them the approach cannot account for cases in which two or more argument roles can only be realized together but not in isolation or in any other combination with other listed roles.

sent, and sendee and the more abstract semantic roles, and the relation between these roles and grammatical functions, for the sentences in (59):

(59) a. He sent her the letter.
     b. He sent the letter.
     c. The letter was sent to her.

While in (59a) the agent, the patient and the recipient are mapped to grammatical functions, only the agent and the patient are mapped to grammatical functions in (59b). The recipient is left out. (59c) shows an argument realization in which the sendee is realized as a *to* phrase. According to van Trijp this semantic role is not a recipient but a goal.

Figure 10.3: Lexical items and phrasal constructions. Figure from van Trijp (2011: 122)

Note that under such an approach, it is necessary to have a passive variant of every active construction. For languages that allow for the combination of passive and impersonal constructions, one would be forced to assume a transitive-passive-impersonal construction. As was argued in Müller (2006: Section 2.6), free datives (commodi/incommodi) in German can be added to almost any construction. They interact with the dative passive and hence should be treated as arguments. So, for the resultative construction one would need an active variant, a passive variant, a variant with a dative argument, a variant with a dative argument and dative passive, and a middle variant. While it is technically possible to list all these patterns, and it is imaginable that we store all this information in our brains, the question is whether such listings really reflect our linguistic knowledge. If a new construction comes into existence, let's say an active sentence pattern with a nominative and two datives in German, wouldn't we expect that this pattern can be used in the passive? While proposals that establish relations between active and

passive constructions would predict this, alternative proposals that just list the attested possibilities do not.

The issue of how such generalizations should be captured was discussed in connection with the organization of the lexicon in HPSG (Flickinger 1987, Meurers 2001). In the lexical world, one could simply categorize all verbs according to their valence and say that *loves* is a transitive verb and the passive variant *loved* is an intransitive one. Similarly, *gives* would be categorized as a ditransitive verb and *given* as a two-place verb. Obviously this misses the point that *loved* and *given* share something: both are related to their active forms in a systematic way. This kind of generalization is captured by lexical rules that relate two lexical items. The generalizations that are captured by lexical rules are called horizontal generalizations, as opposed to vertical generalizations, which describe relations between subtypes and supertypes in an inheritance hierarchy.

The issue is independent of the lexical organization of knowledge; it arises for phrasal representations as well. Phrasal constructions can be organized in hierarchies (vertical), but the relation between certain variants is not covered by this. The analog of lexical rules in a phrasal approach is GPSG-like metarules. So what seems to be missing in FCG is something that relates phrasal patterns, e.g., allostructions (Cappelle 2006; Goldberg 2014: 116, see also footnote 11).

### **10.6.4.3 Fusion, matching and merging**

As was pointed out by Dowty (1989: 89–90), checking for semantic compatibility is not sufficient when deciding whether a verb may enter (or be fused with) a certain construction. The example is the contrast between *dine*, *eat*, and *devour*. While the thing that is eaten may not be realized with *dine*, its realization is optional with *eat* and obligatory with *devour*. So the lexical items have to come with some information about this.

Van Trijp (2011) and Steels & van Trijp (2011) make an interesting suggestion that could help here: every verb comes with a list of potential roles, and argument structure constructions can pick subsets of these roles (see Figure 10.3). This is called *matching*: introducing new argument roles is not allowed. This would make it possible to account for *dine*: one could say that there is something that is eaten, but that no Theme role is made available for linking to the grammatical functions. This would be a misuse of thematic roles for syntactic purposes, though, since *dine* is semantically a two-place predicate. To account for the extension of argument roles as it is observed in the Caused-Motion Construction (Goldberg 1995: Chapter 7), Steels & van Trijp (2011) suggest a process called *merging*. Merging is seen as a repair strategy: if an utterance involves an intransitive verb and some other material, the utterance cannot be processed with matching alone. For example, when processing Goldberg's example in (60), *he sneezed* could be parsed, but *the foam* and *off the cappuccino* would remain unintegrated (see Chapter 21 for an extended discussion of such constructions).

(60) He sneezed the foam off the cappuccino.<sup>28</sup>

<sup>28</sup>Goldberg (2006: 42).


So, Steels & van Trijp (2011: 319–320) suggest that merging is allowed only if regular constructions cannot apply. The problem with this is that human language is highly ambiguous, and in the case at hand this could result in situations in which there is a reading for an utterance, so that the repair strategy would never kick in. Consider (61):<sup>29</sup>

(61) Schlag den Mann tot!
     beat the man dead
     'Beat the man to death!' or 'Beat the dead man!'

(61) has two readings: the resultative reading in which *tot* 'dead' expresses the result of the beating and another reading in which *tot* is a depictive predicate. The second reading is dispreferred, since the activity of beating dead people is uncommon, but the structure is parallel to other sentences with depictive predicates:

(62) Iss den Fisch roh!
     eat the fish raw

The depictive reading can be forced by coordinating *tot* with a predicate that is not a plausible result predicate:

(63) Schlag ihn tot oder lebendig!
     beat him dead or alive
     'Beat him when he is dead or while he is alive!'

So, the problem is that (61) has a reading which does not require the invocation of the repair mechanism: *schlag* 'beat' is used with the transitive construction and *tot* is an adjunct (see Winkler 1997). However, the more likely analysis of (61) is the resultative one, in which the valence frame is extended by an oblique element. So this means that one has to allow merging to apply independently of other analyses that might be possible. As Steels & van Trijp (2011: 320) note, if merging is allowed to apply freely, utterances like (64a) will be allowed, and of course (64b) as well.

	- b. \* She dined a steak.

In (64) *sneeze* and *dined* are used in the transitive construction.

The way out of this dilemma is to include information in lexical items that specifies in which syntactic environments a verb can be used. This information can be weighted: for instance, the probability of *dine* being used transitively would be extremely low. Steels and van Trijp would connect their lexical items to phrasal constructions via so-called coapplication links, and the strength of the respective link would be very low for

<sup>29</sup>I apologize for these examples …. An English example that shows that there may be ambiguity between the depictive and the resultative construction is the following one, which is due to Haider (2016):

(i) They cooked the chicken dry.

I use the German example below since the resultative reading is strongly preferred over the depictive one.

*dine* and the transitive construction and reasonably high for *sneeze* and the Caused-Motion Construction. This would explain the phenomena (and in a usage-based way), but it would be a lexical approach, as it is common in CG, HPSG, SBCG, and DG.
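Such weighted coapplication links can be sketched as follows (my own encoding in Python; the construction names and all numbers are invented for illustration):

```
# A sketch of weighted links between lexical items and argument
# structure constructions; the numbers are invented.
COAPPLICATION = {
    ('dine',   'transitive'):    0.001,  # 'She dined a steak.' is out
    ('eat',    'transitive'):    0.7,
    ('devour', 'transitive'):    0.95,
    ('sneeze', 'transitive'):    0.001,  # blocks a plain transitive use
    ('sneeze', 'caused-motion'): 0.4,    # cf. (60) above
}

def licensed(verb, construction, threshold=0.01):
    """A verb/construction pairing is usable only if the coapplication
    link between the two is strong enough."""
    return COAPPLICATION.get((verb, construction), 0.0) >= threshold

print(licensed('sneeze', 'caused-motion'))   # True
print(licensed('dine', 'transitive'))        # False
```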

### **10.6.4.4 Long-distance dependencies**

Van Trijp (2014) compares the slash-based approaches that are used in GPSG, HPSG, and SBCG with the approach that he suggests within the framework of FCG. He claims that there are fundamental differences between SBCG and FCG and assigns SBCG to the class of generative grammars, while placing FCG in the class of cognitive-functional approaches. He claims that his cognitive-functional approach is superior in terms of completeness, explanatory adequacy, and theoretical parsimony (p. 2). What van Trijp (2014) suggests is basically an analysis that was suggested by Reape (2000) in unpublished work (see Reape (1994) for a published version of a linearization-based approach and Kathol (2000) and Müller (1996c, 1999b, 2002a) for linearization-based approaches that, despite being linearization-based, assume the slash approach for nonlocal dependencies). Van Trijp develops a model of grammar that allows for discontinuous constituents and simply treats the serialization of the object in sentences like (65) as an alternative linearization option.

	- b. What did the boy hit?

Van Trijp's analysis involves several units that do not normally exist in phrase structure grammars, but which can be modeled via adjacency constraints or which represent relations between items that are part of lexical representations in HPSG/SBCG anyway. An example is the subject-verb anchor that connects the subject and the verb to represent the fact that these two items play an important functional role. Figure 10.4 shows the analysis of (66).

(66) What did the boy hit?

As can be seen in the figure, van Trijp also refers to information structural terms like topic and focus. It should be noted here that the analysis of information structure has quite some history in the framework of HPSG (Engdahl & Vallduví 1996, Kuhn 1995, 1996, Günther et al. 1999, Wilcock 2001, 2005, De Kuthy 2002, Paggio 2005, Bildhauer 2008, Bildhauer & Cook 2010). The fact that information structure is not talked about in syntax papers like Sag (2012) does not entail that information structure is ignored or should be ignored in theories like HPSG and SBCG. So much for completeness. The same holds of course for explanatory adequacy. This leaves us with theoretical parsimony, but before I comment on this, I want to discuss van Trijp's analysis in a little bit more detail in order to show that many of his claims are empirically problematic and that his theory therefore cannot be explanatory since empirical correctness is a precondition for explanatory adequacy.

Figure 10.4: The analysis of *What did the boy hit?* according to van Trijp (2014: 265)

Van Trijp claims that sentences with nonlocal dependency constructions in English start with a topic.<sup>30</sup> Bresnan's sentences in (2) and (3) were discussed on page 226 (Bresnan 2001: 97) and are repeated below for convenience:


These sentences show that the pre-subject position is not unambiguously a topic or a focus position. So, a statement saying that the fronted element is a topic is empirically not correct. If this position is to be associated with an information structural function, this association has to be a disjunction admitting both topics and focused constituents.

A further problematic aspect of van Trijp's analysis is his assumption that the auxiliary *do* is an object marker (p. 10, 22) or a non-subject marker (p. 23). It is true that *do* support is not necessary in subject questions like (69a) but only in (69b); however, this does not imply that all items that are followed by *do* are objects.

(69) a. Who saw John?
     b. Who did John see?

First, *do* can be used to emphasize the verb:

(70) Who *did* see the man?

Second, all types of other grammatical functions can precede the verb:

<sup>30</sup>Van Trijp (2014: 256) uses the following definitions for topic and focus: "Topicality is defined in terms of aboutness: the topic of an utterance is what the utterance is 'about'. Focality is defined in terms of salience: focus is used for highlighting the most important information given the current communicative setting."


And finally, even a subject can appear in front of *do* if it is extracted from another clause:


There is a further empirical problem: approaches that assume that a filler is related to its origin can explain scope ambiguities that only arise when an element is extracted. Compare for instance the sentence in (73a) with the sentences in (73b, c): although the order of *oft* 'often' and *nicht* 'not' in (73a) and (73c) is the same, (73a) is ambiguous but (73c) is not.

(73) a. Oft liest er das Buch nicht.
        often reads he the book not
     b. dass er das Buch nicht oft liest
        that he the book not often reads
        'that it is not the case that he reads the book often'
     c. dass er das Buch oft nicht liest
        that he the book often not reads
        'that it is often that he does not read the book'

(73a) has the two readings that correspond to (73b) and (73c). A purely linearization-based approach probably has difficulties explaining this. A slash-based approach can assume that (73a) has a gap (or some similar means for the introduction of nonlocal dependencies) at the position of *oft* in (73b) or (73c). The gap information is taken into account in the semantic composition at the site of the gap. This automatically accounts for the observed readings.
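In a simple predicate-logic sketch (the semantic representations assumed in the frameworks discussed are of course richer), the two readings differ only in the relative scope of often′ and negation, which follows from where the gap is assumed to be:

$$\begin{aligned} \text{(73b)}\quad & \neg\,\mathit{often}'(\mathit{read}'(\mathit{he}',\mathit{book}'))\\ \text{(73c)}\quad & \mathit{often}'(\neg\,\mathit{read}'(\mathit{he}',\mathit{book}')) \end{aligned}$$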

Another empirical problem that has to be solved is the existence of extraction path marking languages. Bouma, Malouf & Sag (2001) list a number of languages in which elements vary depending on the existence or absence of a gap in a constituent they attach to. For instance, Irish has complementizers that have one form if the clause they attach to has an element extracted and another form if it does not. slash-based proposals can account for this in a straightforward way: the fact that a constituent is missing in a phrase is represented in the slash value of the trace, and this information is percolated up the tree. So even complex structures contain the information that there is a constituent missing inside them. Complementizers that are combined with sentences can therefore select sentences with slash values that correspond to the form of the complementizer. Van Trijp's answer to this challenge is that all languages are different (van Trijp 2014: 263) and that the evidence from one language does not necessarily mean that the analysis for that language is also appropriate for another language. While I agree with this view in principle (see Section 13.1), I do think that extraction is a rather fundamental property of languages and that nonlocal dependencies should be analyzed in parallel for those languages that have them.
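The slash-based treatment of path marking can be sketched as follows (my own encoding in Python; the complementizer forms follow the descriptions of Irish in the literature cited above, everything else is invented):

```
# A sketch of slash percolation and an extraction-path-marking
# complementizer; the encoding is invented for illustration.
class Node:
    def __init__(self, cat, slash=(), children=()):
        self.cat = cat
        self.children = children
        # A phrase's slash value collects the slash values of its
        # daughters (binding off by fillers is omitted here).
        self.slash = tuple(slash) or tuple(
            s for child in children for s in child.slash)

def complementizer_for(clause):
    """Pick the complementizer form according to the clause's slash value."""
    return 'aL' if clause.slash else 'go'

gap = Node('NP', slash=('NP',))              # the trace introduces slash
s = Node('S', children=(Node('NP'), Node('V'), gap))
print(complementizer_for(s))                 # aL: a constituent is missing
print(complementizer_for(Node('S', children=(Node('NP'), Node('V')))))  # go
```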

### **10.6.4.5 Coordination**

One of the success stories of non-transformational grammar is the slash-based analysis of nonlocal dependencies by Gazdar (1981b). This analysis made it possible for the first time to explain Ross's Across the Board Extraction (Ross 1967: Section 4.2.4.1). The examples were already discussed on page 201 and are repeated here for convenience:

(74) a. The kennel which Mary made and Fido sleeps in has been stolen. (= S/NP & S/NP)
     b. The kennel in which Mary keeps drugs and Fido sleeps has been stolen. (= S/PP & S/PP)


The generalization is that two (or more) constituents can be coordinated if they have identical syntactic categories and identical slash values. This explains why *which* and *in which* in (74a,b) can fill two positions in the respective clauses. Now, theories that do not use a slash feature for the percolation of information about missing elements have to find different ways to make sure that all argument slots are filled and that the correct correspondence between extracted elements and the respective argument role is established. Note that this is not straightforward in models like the one suggested by van Trijp, since he has to allow the preposition *in* to be combined with some material to the left of it that is simultaneously also the object of *made*. Usually an NP cannot simply be used by two different heads as their argument. As an example consider (75a):

(75) a. * John said about the cheese that I like.
     b. John said about the cheese that I like it.

If it were possible to use material several times, a structure for (75a) would be possible in which *the cheese* is the object of the preposition *about* and of the verb *like*. This sentence, however, is totally out: the pronoun *it* has to be used to fill the object slot.

### **10.6.4.6 Discontinuous constituents and performance models**

Van Trijp points out that SBCG does not have a performance model and contrasts this with FCG. On page 252 he states:

So parsing starts by segmenting the utterance into discrete forms, which are then categorized into words by morphological and lexical constructions, and which can then be grouped together as phrases (see Steels, 2011b, for a detailed account of lexico-phrasal processing in FCG). So the parser will find similar constituents for all four utterances, as shown in examples (21–24). Since auxiliary-*do* in example (24) falls outside the immediate domain of the VP, it is not yet recognized as a member of the VP.

All of these phrases are disconnected, which means that the grammar still has to identify the relations between the phrases. (van Trijp 2014: 252)

In his (21)–(24), van Trijp provides several tree fragments that contain NPs for subject and object and states that these have to be combined in order to analyze the sentences he discusses. This is empirically inadequate: if FCG does not make the competence/performance distinction, then the way utterances are analyzed should reflect the way humans process language (and this is what is usually claimed about FCG). However, everything we know about human language processing points towards incremental processing, that is, we process information as soon as it is available. We start to process the first word, taking into account all of the relevant aspects (phonology, stress, part of speech, semantics, information structure), and come up with a hypothesis about how the utterance could proceed. As soon as we have processed two words (in fact even earlier: integration already happens during the processing of words), we integrate the second word into what we know already and continue to follow our hypothesis, or revise it, or simply fail. See Section 15.2 for details on processing and for the discussion of experiments that show that processing is incremental. So, we have to say that van Trijp's analysis fails on empirical grounds: his modeling of performance aspects is not adequate.

The parsing scheme that van Trijp describes is pretty similar to those of computational HPSG parsers, but these usually come without any claims about human performance. Modeling human performance is rather complex since a lot of factors play a role. It is therefore reasonable to separate competence and performance and continue to work the way it is done in HPSG and FCG. This does not mean that performance aspects should not be modeled; in fact, psycholinguistic models using HPSG have been developed in the past (Konieczny 1996), but developing both a grammar with large coverage and a performance model that combines with it demands a lot of resources.

### **10.6.4.7 Discontinuity vs. Subject-Head and Head-Filler Schema**

I now turn to parsimony: van Trijp uses a subject-verb anchor construction that combines the subject and the main verb. Because of examples like (76) it must be possible to have discontinuous subject-verb constructions:<sup>31</sup>

(76) a. Peter often reads books.
     b. Peter has read the book.

<sup>31</sup>Unless modals and tense auxiliaries are treated as main verbs (which they should not be in English), constructions with modals seem to be another case where the subject and the main verb are not adjacent:

But if such constructions can be discontinuous, one has to make sure that (77b) cannot be an instantiation of the subject-verb construction:

(77) a. I think the boy left.
     b. * I the boy think left.

Here it is required that the subject and the verb it belongs to be adjacent, modulo some intervening adverbials. This is modeled quite nicely in phrase structure grammars that have a VP node. Whatever the internal structure of such a VP node may be, it has to be adjacent to the subject in sentences like (76) and (77a) above. The dislocated element has to be adjacent to the complex consisting of subject and VP. This is what the Filler-Head Schema does in HPSG and SBCG. Van Trijp criticizes SBCG for having to stipulate such a schema, but I cannot see how his grammar can be complete without a statement that ensures the right order of elements in sentences with fronted elements.

Van Trijp states that FCG differs from what he calls generative approaches in that it does not aim to characterize only the well-formed utterances of a language. According to him, the parsing direction is much more liberal in accepting input than in other theories. So it could well be that he is happy to find a structure for (77b). Note, though, that this is incompatible with other claims made by van Trijp: he argued that FCG is superior to other theories in that it comes with a performance model (or rather in not separating competence from performance at all). But then (77b) should be rejected on both competence and performance grounds. It is just unacceptable, and speakers reject it for whatever reasons. Any sufficiently worked out theory of language has to account for this.

### **10.6.4.8 Restricting discontinuity**

There is a further problem related to discontinuity. If discontinuity is not restricted, then constituent orders like (78b) are admitted by the grammar:

(78) b. * Deshalb klärt dass ob Peter Klaus kommt spielt.
        therefore resolves that whether Peter Klaus comes plays

The interesting thing about the word salad in (78b) is that the constituent order within the *dass* clause and within the *ob* clause is correct. That is, the complementizer precedes the subject, which in turn precedes the verb. The problem is that the constituents of the two clauses are mixed.

In a model that permits discontinuous constituents, one cannot require that all parts of an argument have to be arranged after all parts that belong to another argument since discontinuity is used to account for nonlocal dependencies. So, it must be possible to have *Klaus* before other arguments (or parts of other arguments) since *Klaus* can be extracted. An example of mixing parts of phrases is given in (79):

(79) Dieses Buch hat der Mann mir versprochen, seiner Frau zu geben, der gestern hier aufgetreten ist.
     this book has the man me promised his wife to give who yesterday here performed is
     'The man who performed here yesterday promised me to give this book to his wife.'

We see that material that refers to *der Mann* 'the man', namely the relative clause *der gestern hier aufgetreten ist* 'who performed here yesterday', appears to the right. And the object of *geben* 'to give', which would normally be part of the phrase *dieses Buch seiner Frau zu geben* 'to give this book to his wife', appears to the left. So, in general it is possible to mix parts of phrases, but only in a very restricted way. Some dependencies extend all the way to the left of certain units (fronting) and others all the way to the right (extraposition). Extraposition is clause-bound, while extraction is not. In approaches like GPSG, HPSG and SBCG, the facts are covered by assuming that the constituents of a complete clause are continuous apart from those that are fronted or extraposed. The fronted and extraposed constituents are represented in slash and extra (Keller 1995; Müller 1999b: Section 13.2; Crysmann 2013), respectively, rather than in valence features, so that it is possible to require that constituents that have all their valents saturated be continuous (Müller 1999c: 294).
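The continuity requirement can be sketched as follows (my own encoding in Python; the position indices are invented):

```
# A sketch of the continuity requirement: a constituent whose valents
# are all saturated must occupy a contiguous stretch of the utterance,
# except for parts represented in SLASH (fronting) or EXTRA
# (extraposition).
def contiguous(positions):
    return max(positions) - min(positions) + 1 == len(positions)

def continuity_ok(parts, slash=frozenset(), extra=frozenset()):
    """`parts` maps the pieces of a saturated constituent to string
    positions; fronted and extraposed pieces are exempt."""
    core = [pos for piece, pos in parts.items()
            if piece not in slash and piece not in extra]
    return contiguous(core)

# Simplified rendering of (79): the fronted object is in SLASH, the
# remaining pieces of the infinitival phrase are contiguous.
parts = {'dieses Buch': 0, 'seiner Frau': 6, 'zu': 7, 'geben': 8}
print(continuity_ok(parts, slash={'dieses Buch'}))   # True
print(continuity_ok(parts))                          # False: a gap remains
```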

Summing up the discussion of parsimony, it has to be said that van Trijp has to provide the details on how continuity is ensured. The formalization of this is not trivial and only after this is done can FCG be compared with the slash-based approach.

In addition to all the points discussed so far, there is a logical flaw in van Trijp's argumentation. He states that:

whereas the filler-gap analysis cannot explain why *do*-support does not occur in *wh*-questions where the subject is assigned questioning focus, this follows naturally from the interaction of different linguistic perspectives in this paper's approach. (van Trijp 2014: 263)

The issue here is whether a filler-gap analysis or an analysis with discontinuous constituents is better suited to explaining the data. A correct argumentation against the filler-gap analysis would require a proof that information structural or other functional constraints cannot be combined with this analysis. This proof was not provided, and in fact I think it cannot be provided, since there are approaches that integrate information structure. Simply pointing out that a theory is incomplete does not falsify that theory. This point was already made in my review of Boas (2003) and in a reply to Boas (2014). See Müller (2005a: 655–656), Müller (2007a: Chapter 20), and Müller & Wechsler (2014b: Footnote 15).

The conclusion about the FCG analysis of nonlocal dependencies is that there are some empirical flaws that can easily be fixed or assumptions that can simply be dropped (the role of *do* as object marker, the claim that the initial position in the English fronting construction is the topic), some empirical shortcomings (coordination, admittance of ill-formed

utterances with discontinuous constituents), and some empirical problems when the analysis is extended to other languages (scope of adjuncts in German); moreover, the parsimony of the analyses is not really comparable, since the restrictions on continuity are not really worked out (or at least not published). If the formalization of restrictions on continuity in FCG turns out to be even half as complex as the formalization that is necessary for accounts of nonlocal dependencies (extraction and extraposition) in linearization-based HPSG that Reape (2000) suggested,<sup>32</sup> the slash-based analysis would be preferable.

In any case, I do not see how nonlocal dependencies could be used to drive a wedge between SBCG and FCG. If there are functional considerations that have to be taken into account, they should be modeled in both frameworks. In general, FCG should be more restrictive than SBCG since FCG claims to integrate a performance model, so both competence and performance constraints should be operative. I will come back to the competence/performance distinction in the following section, which is a more general comparison of SBCG and FCG.

### **10.6.4.9 Comparison to Sign-Based Construction Grammar/HPSG**

According to van Trijp (2013), there are the differences shown in Table 10.1. These differences will be discussed in the following subsections.


Table 10.1: Differences between SBCG and FCG according to van Trijp (2013: 112)

### 10.6.4.9.1 Competence/performance distinction

As for the linguistic approach, the use of the term *generative* is confusing. What van Trijp means – and also explains in the paper – is the idea that one should separate

<sup>32</sup>See Kathol & Pollard (1995) for a linearization-based account of extraposition. This account is implemented in the Babel System (Müller 1996c). See Müller (1999c) on restricting discontinuity. Linearization-based approaches were argued not to be able to account for apparent multiple frontings in German (Müller 2005c, 2023a), and hence linearization-based approaches were replaced by more traditional variants that allow for continuous constituents only.

competence and performance. We will deal both with the generative-enumerative vs. constraint-based view and with the competence/performance distinction in more detail in Chapters 14 and 15, respectively. Concerning the cognitive-functional approach, van Trijp writes:

The goal of a cognitive-functional grammar, on the other hand, is to explain how speakers express their conceptualizations of the world through language (= *production*) and how listeners analyze utterances into meanings (= *parsing*). Cognitive-functional grammars therefore implement both a competence and a processing model. (van Trijp 2013: 90)

It is true that HPSG and SBCG make a competence/performance distinction (Sag & Wasow 2011). HPSG theories are theories about the structure of utterances that are motivated by distributional evidence. These theories do not contain any hypotheses regarding brain activation, planning of utterances, processing of utterances (garden path effects) and similar things. In fact, none of the theories that are discussed in this book contains an explicit theory that explains all these things. I think that it is perfectly legitimate to work in this way: it is legitimate to study the structure of words without studying their semantics and pragmatics, it is legitimate to study phonology without caring about syntax, it is legitimate to deal with specific semantic problems without caring about phonology, and so on, provided there are ways to integrate the results of such research into a bigger picture. In comparison, it is wrong to develop models like those developed in current versions of Minimalism (called Biolinguistics), where it is assumed that utterances are derived in phases (NPs, CPs, depending on the variant of the theory) and then shipped to the interfaces (spell-out and semantic interpretation). This is not what humans do (see Chapter 15).<sup>33</sup> But if we remain neutral with respect to such issues, we are fine. In fact, there is psycholinguistic work that couples HPSG grammars to performance models (Konieczny 1996), and similar work exists for TAG (Shieber & Johnson 1993, Demberg & Keller 2008).

Finally, there is also work in Construction Grammar that abstracts away from performance considerations. For instance, Adele Goldberg's book from 1995 does not contain a worked out theory of performance facts. It contains boxes in which grammatical functions are related to semantic roles. So this basically is a competence theory as well. Of course there are statements about how this is connected to psycholinguistic findings, but this is also true for theories like HPSG, SBCG and Simpler Syntax (Jackendoff 2011: 600) that explicitly make the competence/performance distinction.

### 10.6.4.9.2 Mathematical formalization vs. implementation

The difference between mathematical and computational formalization is a rather strange distinction to make. I think that a formal and precise description is a prerequisite for implementation (see the discussion in Section 3.6.2 and Section 4.7.2). Apart from this, a computer implementation of SBCG is trivial, given the systems that we have for processing HPSG grammars. In order to show this, I want to address one issue that van Trijp

<sup>33</sup>Attempts to integrate current Minimalist theories with psycholinguistic findings (Phillips 2003) are incompatible with core principles of Minimalism like the *No Tampering Condition* of Chomsky (2008).


discusses. He claims that SBCG cannot be directly implemented. On issues of the complexity of constraint solving systems, he quotes Levine & Meurers (2006: Section 4.2.2):

Actual implementations of HPSG typically handle the problem by guiding the linguistic processor using a (rule-based) phrase structure backbone, but the disadvantage of this approach is that the "organization and formulation of the grammar is different from that of the linguistic theory" (Levine & Meurers 2006: Section 4.2.2). (van Trijp 2013: 108)

He concludes:

Applying all these observations to the operationalization of SBCG, we can conclude that an SBCG grammar is certainly amenable for computational implementation because of its formal explicitness. There are at least two computational platforms available, mostly used for implementing HPSG-based grammars, whose basic tenets are compatible with the foundations of SBCG: LKB (Copestake 2002) and TRALE (Richter 2006). However, none of these platforms supports a 'direct' implementation of an SBCG grammar as a general constraint system, so SBCG's performance-independence hypothesis remains conjecture until proven otherwise.

There are two issues that should be kept apart here: efficiency and faithfulness to the theory. First, as Levine and Meurers point out, there were many constraint solving systems at the beginning of the 1990s. So there are computer systems that can be and have been used to implement and process HPSG grammars. This is very valuable since they can be used for direct verification of specific theoretical proposals. As was discussed by Levine and Meurers, trying to solve constraints without any guidance is not the most efficient way to deal with the parsing/generation problem. Therefore, additional control structure was added. This control structure is used, for instance, in a parser to determine the syntactic structure of a phrase, and other constraints will apply as soon as there is sufficient information available for them to apply. For instance, the assignment of structural case happens once the arguments of a head are realized. Now, is it bad to have a phrase structure backbone? One can write down phrase structure grammars that use phrase structure rules that have nothing to do with what HPSG grammars usually do. The systems TRALE (Meurers, Penn & Richter 2002, Penn 2004) and LKB will process them. But one is not forced to do this. For instance, the grammars that I developed for the CoreGram project (Müller 2013b, 2015c) are very close to the linguistic theory. To see that this is really the case, let us look at the Head-Argument Schema. The Head-Argument Schema is basically the type *head-argument-phrase* with certain type constraints that are partly inherited from its supertypes. The type with all the constraints was given on page 282 and is repeated here as (80):

(80) (syntactic) constraints on *head-complement-phrase*:

$$\begin{bmatrix} \textit{head-complement-phrase} \\ \text{SYNSEM|LOC|CAT}\ \begin{bmatrix} \text{HEAD}\ \boxed{1} \\ \text{COMPS}\ \boxed{2} \end{bmatrix} \\ \text{HEAD-DTR|SYNSEM|LOC|CAT}\ \begin{bmatrix} \text{HEAD}\ \boxed{1} \\ \text{COMPS}\ \boxed{2} \oplus \langle\, \boxed{3} \,\rangle \end{bmatrix} \\ \text{NON-HEAD-DTRS}\ \langle\, [\text{SYNSEM}\ \boxed{3}] \,\rangle \end{bmatrix}$$

This can be translated into phrase structure grammar rules in a straightforward way:

(81) a. 1 → 4 5
     b. 1 → 5 4

where 1 is a sign of type *head-complement-phrase*, 4 its head daughter (the value of head-dtr), and 5 its non-head daughter (the single element of non-head-dtrs).

The left hand side of the rule is the mother node of the tree, that is, the sign that is licensed by the schema provided that the daughters are present. The right hand side in (81a) consists of the head daughter 4 followed by the non-head daughter 5 . We have the opposite order in (81b), that is, the head daughter follows the non-head daughter. The two orders correspond to the two orders that are permitted by LP-rules: the head precedes its argument if it is marked initial+ and it follows it if it is marked initial−.

The following code shows how (81b) is implemented in TRALE:

```
arg_h rule (head_complement_phrase,
            synsem:loc:cat:head:initial:minus,
            head_dtr:HeadDtr,
            non_head_dtrs:[NonHeadDtr])
  ===>
cat> NonHeadDtr,
cat> HeadDtr.
```

A rule starts with an identifier that is needed for technical reasons like displaying intermediate structures in the parsing process in debugging tools. A description of the mother


node follows, and after the arrow we find a list of daughters, each introduced by the operator cat>.<sup>34</sup> Structure sharing is indicated by values with capital letters. The above TRALE rule is a computer-readable variant of (81b), additionally including the explicit specification of the value of initial.

Now, the translation of a parallel schema using a mother feature like (82a) into a phrase structure rule is almost as simple:

(82) a. constraints on *head-complement-cxt*:

$$\begin{bmatrix} \textit{head-complement-cxt} \\ \text{MOTHER|SYNSEM|LOC|CAT}\ \begin{bmatrix} \text{HEAD}\ \boxed{1} \\ \text{COMPS}\ \boxed{2} \end{bmatrix} \\ \text{HEAD-DTR|SYNSEM|LOC|CAT}\ \begin{bmatrix} \text{HEAD}\ \boxed{1} \\ \text{COMPS}\ \boxed{2} \oplus \langle\, \boxed{3} \,\rangle \end{bmatrix} \\ \text{NON-HEAD-DTRS}\ \langle\, [\text{SYNSEM}\ \boxed{3}] \,\rangle \end{bmatrix}$$

     b. 1 → 4 5
        where *head-complement-cxt* ⇒ [mother 1, head-dtr 4, non-head-dtrs ⟨ 5 ⟩]

(82b) is only one of the two phrase structure rules that correspond to (82a), but since the other one only differs from (82b) in the ordering of 4 and 5, it is not given here.

For grammars in which the order of the elements corresponds to the observable order of the daughters in a dtrs list, the connection to phrase structure rules is even simpler:

(83) 1 → 2
     where *construction* ⇒ [mother 1, dtrs 2]

The value of dtrs is a list, and hence 2 stands for the list of daughters on the right hand side of the phrase structure rule as well. The type *construction* is a supertype of all constructions, and hence (83) can be used to analyze all phrases that are licensed by the grammar. In fact, (83) is one way of stating the meta constraint in (33).
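The correspondence in (83) is simple enough to sketch in a few lines of Python (my own encoding; the category names are invented):

```
# A sketch of the meta-constraint in (83): every construction with a
# MOTHER value and a DTRS list corresponds to a phrase structure rule
# whose left-hand side is the MOTHER value and whose right-hand side
# is the DTRS list.
def to_phrase_structure_rule(construction):
    lhs = construction['mother']
    rhs = construction['dtrs']      # the observable order of the daughters
    return lhs, rhs

cxt = {'mother': 'NP', 'dtrs': ['Det', 'N']}
lhs, rhs = to_phrase_structure_rule(cxt)
print(lhs, '->', ' '.join(rhs))     # NP -> Det N
```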

So, this shows that the version of SBCG that has been developed by Sag (2012) has a straightforward implementation in TRALE.<sup>35</sup> The question remains whether "SBCG's performance-independence hypothesis remains conjecture until proven otherwise" as

<sup>34</sup>Other operators are possible in TRALE. For instance, sem\_head can be used to guide the generator. This is control information that has nothing to do with linguistic theory and not necessarily with the way humans process natural language. There is also a cats operator, which precedes lists of daughters. This can be used to implement phrase structures with more than one non-head daughter.

<sup>35</sup>A toy fragment of English using a mother feature and phrase structure rules with specifications of the kind given above can be downloaded at https://hpsg.hu-berlin.de/Fragments/SBCG-TRALE/.

van Trijp sees it. The answer is: it is not a conjecture, since any of the old constraint-solving systems of the nineties could be used to process SBCG. The question of whether this is efficient is an engineering problem that is entirely irrelevant for theoretical linguistics. Theoretical linguistics is concerned with human languages and how they are processed by humans. So whether some processing system that does not make any claims about human language processing is efficient or not is absolutely irrelevant. Phrase structure-based backbones are therefore irrelevant as well, provided they refer to the grammar as described in theoretical work.

Now, this raises the question of whether there is a contradiction in my claims. On page 337 I pointed out that SBCG lacks a formalization in Richter's framework (Richter 2004). Richter and also Levine & Meurers (2006) pointed out that there are problems with certain theoretically possible expressions, and it is these expressions that mathematical linguists care about. So the goal is to be sure that any HPSG grammar has a meaning and that it is clear what it is. This goal is therefore much more foundational than writing a single grammar for a particular fragment of a language. There is no such foundational work for FCG, since FCG is a specific toolkit that has been used to implement a set of grammars.

### 10.6.4.9.3 Static constraints vs. dynamic mappings and signature + grammar vs. open-endedness

One very interesting feature of Fluid Construction Grammar is its fluidity: certain constraints can be adapted if there is pressure, and the inventory of the theory is open-ended, so categories and features can be added if need be.

Again, this is not a fundamental difference between HPSG/SBCG and FCG. An HPSG grammar fragment of a specific language is a declarative representation of linguistic knowledge, and as such it of course just represents a certain fragment and does not contain any information about how this set of constraints evolved or how it is acquired by speakers. For this we need specific theories about language evolution, language change, and language acquisition. This is parallel to what was said about the competence/performance distinction: in order to account for language evolution, we would have to have several HPSG grammars and say something about how one developed from the other. This will involve weighted constraints, it will involve recategorization of linguistic items, and lots more.<sup>36</sup> So basically HPSG has to be extended, has to be paired with a model of language evolution, in the very same way as FCG is.

<sup>36</sup>There are systems that use weighted constraints. We already had a simple version of this in the German HPSG grammar that was developed in the Verb*mobil* project (Müller & Kasper 2000). Further theoretical approaches to integrating weighted constraints are Brew (1995) and, more recently, Guzmán Naranjo (2015). Usually such weighted constraints are not part of theoretical papers, but there are exceptions, as for instance the paper by Briscoe and Copestake about lexical rules (Briscoe & Copestake 1999).

### 10.6.4.9.4 Theoretical physics vs. Darwinian evolutionary theory

Van Trijp compares SBCG and FCG and claims that SBCG follows the model of theoretical physics – like Chomsky does – while FCG adopts a Darwinian model of science – like Croft does – the difference being that SBCG makes certain assumptions that are taken to hold of all languages, while FCG does not make any a priori assumptions. The fundamental assumption made in both theories is that the objects that we model are best described by feature value pairs (a triviality). FCG assumes that there is always a syntactic and a semantic pole (a fundamental assumption in the system), and researchers working in HPSG/SBCG assume that if languages have certain phenomena, they will be analyzed in similar ways. For instance, if a language has nonlocal dependencies, these will be analyzed via the slash mechanism. However, this does not entail that one believes that the grammars of all languages have a slash feature. And in fact, there may even be languages that do not have valence features (Koenig & Michelson 2010), which may be a problem for FCG since it relies on the SYN-pole for the matching phase. So as far as SBCG is concerned, there is considerable freedom to choose the features that are relevant in an analysis, and of course additional features and types can be assumed in case a language is found that provides evidence for them. The only example of a constraint provided by van Trijp that is possibly too strong is the locality constraint imposed by the mother feature. The idea behind this feature is that everything that is of relevance in a more nonlocal context has to be passed up explicitly. This is done for nonlocal dependencies (via slash) and, for instance, also for information concerning the form of a preposition inside of a PP (via pform or more recently via form). Certain verbs require prepositional objects and restrict the form of the preposition. For instance, *wait* has to make sure that its prepositional object contains the preposition *for*. Since this information is usually available only at the preposition, it has to be passed up to the PP level in order to be directly selectable by the governing verb.

(84) I am waiting for my man.

So, assuming strict locality of selection requires that all phenomena that cannot be treated locally be analyzed by passing information up. Assuming strict locality is a design decision that does not have any empirical consequences, as long as it does not rule out any language or construction in principle. It just requires that information that needs to be accessed at higher nodes be passed up. As I have shown in Section 10.6.2, the locality constraint is easily circumvented even within SBCG, and it makes the analysis of idioms unnecessarily complicated and unintuitive, so I suggest dropping the mother feature. But even if mother is kept, it is not justified to draw a distinction between SBCG and FCG along the lines suggested by van Trijp.

Independent of the mother issue, the work done in the CoreGram project (Müller 2013b, 2015c) shows that one can derive generalizations in a bottom-up fashion rather than imposing constraints on grammars in a top-down way. The latter paper discusses Croft's methodological considerations and shows how methodological pitfalls are circumvented in the project. HPSG/SBCG research differs from work in Chomskyan frameworks in not trying to show that all languages are like English or Romance or German or whatever; rather, languages are treated in their own right, as is common in the Construction Grammar community. This does not imply that there is no interest in generalizations and universals or near-universals or tendencies, but again the style of working and the rhetoric in HPSG/SBCG are usually different from those in Mainstream Generative Grammar. Therefore, I think that the purported difference between SBCG and FCG does not exist.

### 10.6.4.9.5 Permissiveness of the theories

Van Trijp claims that HPSG/SBCG is a "generative grammar" since its aim is to account for and admit only grammatical sentences. FCG, on the other hand, is more permissive and tries to get the most out of the input even if it is fragmentary or ungrammatical (see also Steels 2013: 166). While it is an engineering decision to be able to parse ungrammatical input – and there most certainly are systems for the robust processing of HPSG grammars (Kiefer, Krieger & Nederhof 2000; Copestake 2007) – it is also clear that humans cannot parse everything. There are strong constraints whose violations cause measurable effects in the brain. This is something that a model of language (one that includes competence and performance factors or does not make the distinction at all) has to explain. The question is what the cause of deviance is: is it processing complexity? Is it a category mismatch? A clash in information structure? So, if FCG permits structures that are not accepted by human native speakers and that do not make any sense whatsoever, additional constraints have to be added. If they are not added, the respective FCG theory is not an adequate theory of the language under consideration. Again, there is no difference between HPSG/SBCG and FCG.

### 10.6.4.9.6 A note on engineering

A problematic property of work done in FCG is that linguistic and engineering aspects are mixed.<sup>37</sup> Certain bookkeeping features that are needed only for technical reasons appear in linguistic papers, and technical assumptions that are made to get a parser running are mixed with linguistic constraints. Bit vector encodings that are used to represent case information are part of papers about interesting case systems. There is certainly nothing wrong with bit vector encodings – they are used in HPSG implementations as well (Reape 1991: 55; Müller 1996c: 269) – but there this is not mixed into the theoretical papers.

It was a big breakthrough in the 1980s when theoretical linguists and computational linguists started working together and developed declarative formalisms that were independent of specific parsers and processing systems. This made it possible to take over insights from many linguists who were not concerned with the actual implementation but took care of finding linguistic generalizations and specifying constraints.

<sup>37</sup>This is not a problem if all FCG papers are read as papers documenting the FCG system (see Footnote 26 on page 346), since then it would be necessary to include these technical details. If the FCG papers are to be read as theoretical linguistics papers that document a certain Construction Grammar analysis, the Lisp statements and the implementational details are simply an obstacle.

Since this separation is given up in FCG, it will remain an engineering project without much appeal to the general linguist.

# **10.7 Summary and classification**

There are currently three formalized variants of Construction Grammar: Sign-Based Construction Grammar, Embodied Construction Grammar, and Fluid Construction Grammar. The first two variants can be viewed as notational variants of (Constructional) HPSG (for SBCG with regard to this point, see Sag (2007: 411) and Sag (2010: 486)) or, put differently, as sister theories of HPSG. This is also true to a large extent of FCG, although van Trijp (2013) spends 25 pages working out the alleged differences. As I have shown in Section 10.6.4, HPSG and FCG are rather similar, and I would say that these theories are sister theories as well.

Due to the origins of all three theories, the respective analyses can differ quite considerably: HPSG is a strongly lexicalized theory, in which phrasal dominance schemata have come to be used more extensively only in the last ten years, under the influence of Ivan Sag. The phrasal dominance schemata that Ivan Sag uses in his work are basically refinements of schemata that were present in earlier versions of HPSG. Crucially, all phenomena that interact with valence receive a lexical analysis (Sag, Boas & Kay 2012: Section 2.3). In CxG, on the other hand, predominantly phrasal analyses are adopted, due to the influence of Adele Goldberg.

As already emphasized in Chapter 9, these are only tendencies that do not apply to all researchers working in the theories in question.

## **Exercises**

1. Find three examples of utterances whose meaning cannot be derived from the meaning of the individual words. Consider how one could analyze these examples in Categorial Grammar (yes, Categorial Grammar).

## **Further reading**

There are two volumes on Construction Grammar in German: Fischer & Stefanowitsch (2006) and Stefanowitsch & Fischer (2008). Deppermann (2006) discusses Construction Grammar from the point of view of conversational analysis. Volume 37(3) (2009) of the *Zeitschrift für germanistische Linguistik* was also devoted to Construction Grammar. Goldberg (2003a) and Michaelis (2006) are overview articles in English. Goldberg's books constitute important contributions to Construction Grammar (1995, 2006, 2009). Goldberg (1995) has argued against lexical analyses such as those common in GB, LFG, CG, HPSG, and DG. These arguments can be invalidated, however, as will be shown in Section 21.7.1. Sag (1997), Borsley (2006), Jacobs (2008) and Müller & Lipenkova (2009) give examples of constructions that require a phrasal analysis if one wishes to avoid postulating empty elements. Jackendoff (2008) discusses the noun-preposition-noun construction, which can only be properly analyzed as a phrasal construction (see Section 21.10). The discussion on whether argument structure constructions should be analyzed phrasally or lexically (Goldberg 1995, 2006, Müller 2006) culminated in a series of papers (Goldberg 2013a) and a target article by Müller & Wechsler (2014a) with several responses in the same volume. Müller (2018a) discusses phrasal LFG approaches to benefactives and resultatives and compares them with lexical HPSG proposals, showing once more that phrasal approaches face problems. Müller (2021b) compares HPSG and Construction Grammar. The HPSG handbook (Müller et al. 2021) assumes Constructional HPSG (Sag 1997). The handbook has several chapters with sections in which Constructional HPSG is compared with Sign-Based Construction Grammar (Sag 2012).

Tomasello's publications on language acquisition (Tomasello 2000, 2003, 2005, 2006c) constitute a Construction Grammar alternative to the Principles & Parameters theory of acquisition, as it does not have many of the problems that P&P analyses have (for more on language acquisition, see Chapter 16). For more on language acquisition and Construction Grammar, see Behrens (2009).

Dąbrowska (2004) looks at psycholinguistic constraints for possible grammatical theories.

# **11 Dependency Grammar**

Dependency Grammar (DG) is the oldest framework described in this book. According to Hudson (2021: 1452), the basic assumptions made today in Dependency Grammar were already present in the work of the Hungarian Sámuel Brassai in 1873 (see Imrényi 2013), the Russian Aleksej Dmitrievsky in 1877, and the German Franz Kern (1884). The most influential version of DG was developed by the French linguist Lucien Tesnière (1893–1954). His foundational work *Eléments de syntaxe structurale* 'Elements of structural syntax' was basically finished in 1938, only three years after Ajdukiewicz's paper on Categorial Grammar (1935), but its publication was delayed until 1959, five years after his death. Since valence is central in Dependency Grammar, it is sometimes also referred to as Valence Grammar. Tesnière's ideas are widespread nowadays. The concepts of valence and dependency are present in almost all of the current theories (Ágel & Fischer 2010: 262–263, 284).

Although there is some work on English (Anderson 1971, Hudson 1984), Dependency Grammar is most popular in central Europe and especially so in Germany (Engel 1996: 56–57). Ágel & Fischer (2010: 250) identified a possible reason for this: Tesnière's original work was not available in English until very recently (Tesnière 2015), but there has been a German translation for more than 35 years now (Tesnière 1980). Since Dependency Grammar focuses on dependency relations rather than on the linearization of constituents, it is often felt to be more appropriate for languages with freer constituent order, which is one reason for its popularity among researchers working on Slavic languages: the New Prague School represented by Sgall, Hajičová and Panevova developed Dependency Grammar further, beginning in the 1960s (see Hajičová & Sgall 2003 for an overview). In the 1960s, Igor A. Mel'čuk and A. K. Žolkovskij started to work in the Soviet Union on a model called Meaning–Text Theory, which was also used in machine translation projects (Mel'čuk 1964, 1981, 1988, Kahane 2003). Mel'čuk emigrated from the Soviet Union to Canada in the 1970s and now works in Montréal.

Dependency Grammar is very widespread in Germany and among scholars of German linguistics worldwide. It is used very successfully for teaching German as a foreign language (Helbig & Buscha 1969, 1998). Helbig and Buscha, who worked in Leipzig, East Germany, started to compile valence dictionaries (Helbig & Schenkel 1969), and later researchers working at the Institut für Deutsche Sprache (Institute for German Language) in Mannheim began similar lexicographic projects (Schumacher et al. 2004).

The following enumeration is a probably incomplete list of linguists who are/were based in Germany: Vilmos Ágel (2000), Kassel; Klaus Baumgärtner (1965, 1970), Leipzig, later Stuttgart; Ulrich Engel (1977, 2014), IDS Mannheim; Hans-Werner Eroms (1985, 1987, 2000), Passau; Heinz Happ, Tübingen; Peter Hellwig (1978, 2003), Heidelberg; Jürgen Heringer (1996), Augsburg; Jürgen Kunze (1968, 1975), Berlin; Henning Lobin (1993), Gießen; Klaus Schubert (1987), Hildesheim; Heinz Josef Weber (1997), Trier; Klaus Welke (1988, 2011), Humboldt University Berlin; Edeltraud Werner (1993), Halle-Wittenberg.

Although work has been done in many countries and continuously over the decades since 1959, a recurring international conference was established as late as 2011.<sup>1,2</sup>

From early on, Dependency Grammar was used in computational projects. Mel'čuk worked on machine translation in the Soviet Union (Mel'čuk 1964) and David G. Hays worked on machine translation in the United States (Hays & Ziehe 1960). Jürgen Kunze, based in East Berlin at the German Academy of Sciences, where he had a chair for computational linguistics, also started to work on machine translation in the 1960s. A book describing the formal background of this linguistic work was published as Kunze (1975). Various researchers worked in the Collaborative Research Center 100 *Electronic linguistic research* (SFB 100, Elektronische Sprachforschung) in Saarbrücken from 1973 to 1986. The main topic of this SFB was machine translation as well. There were projects on Russian to German, French to German, English to German, and Esperanto to German translation. For work from Saarbrücken in this context, see Klein (1971), Rothkegel (1976), and Weissgerber (1983). Muraki et al. (1985) used Dependency Grammar in a project that analyzed Japanese and generated English. Richard Hudson started to work in a dependency grammar-based framework called Word Grammar in the 1980s (Hudson 1984, 2007), and Sleator and Temperley have been working on Link Grammar since the 1990s (Sleator & Temperley 1991, Grinberg et al. 1995). Fred Karlsson's Constraint Grammars (1990) have been developed for many languages (bigger fragments are available for Danish, Portuguese, Spanish, English, Swedish, Norwegian, French, German, Esperanto, Italian, and Dutch) and are used for school teaching, corpus annotation, and machine translation. An online demo is available at the project website.<sup>3</sup>

In recent years, Dependency Grammar has become more and more popular among computational linguists. The reason for this is that there are many annotated corpora (treebanks) that contain dependency information.<sup>4</sup> Statistical parsers are trained on such treebanks (Yamada & Matsumoto 2003, Attardi 2006, Nivre 2003, Kübler et al. 2009, Bohnet 2010). Many of the parsers work for multiple languages since the general approach is language independent. It is easier to annotate dependencies consistently since there are fewer analytical choices: while syntacticians working in constituency-based models may assume binary branching or flat structures, high or low attachment of adjuncts, empty elements or no empty elements, and argue fiercely about this, it is fairly clear what the dependencies in an utterance are. Therefore, it is easy to annotate consistently and to train statistical parsers on such annotated data.
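To give a concrete impression of such annotations, the following shows how the dependencies of a simple English sentence could be encoded in CoNLL-U, one widely used treebank format (not discussed further in this book). Each word gets an index, and the HEAD column contains the index of the governing word, with 0 marking the root; the columns are tab-separated in real files (aligned here for readability), and the particular label choices are merely illustrative:

```
# text = The child reads a book.
1   The     the     DET     _   _   2   det     _   _
2   child   child   NOUN    _   _   3   nsubj   _   _
3   reads   read    VERB    _   _   0   root    _   _
4   a       a       DET     _   _   5   det     _   _
5   book    book    NOUN    _   _   3   obj     _   _
6   .       .       PUNCT   _   _   3   punct   _   _
```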

Apart from statistical modeling, there are also so-called deep processing systems, that is, systems that rely on a hand-crafted, linguistically motivated grammar.

<sup>1</sup> http://depling.org/. 2018-02-20.

<sup>2</sup>A conference on Meaning–Text Theory has taken place biennially since 2003.

<sup>3</sup> http://beta.visl.sdu.dk/constraint\_grammar. 2018-02-20.

<sup>4</sup>According to Kay (2000), the first treebank ever was developed by Hays and annotated dependencies.

I already mentioned Mel'čuk's work in the context of machine translation; Hays & Ziehe (1960) had a parser for Russian; Starosta & Nomura (1986) developed a parser that was used with an English grammar; Jäppinen, Lehtola & Valkonen (1986) developed a parser that was demoed with Finnish; Hellwig (1986, 2003, 2006) implemented grammars of German in the framework of Dependency Unification Grammar; Hudson (1989) developed a Word Grammar for English; Covington (1990) developed a parser for Russian and Latin, which can parse discontinuous constituents; and Menzel (1998) implemented a robust parser of a Dependency Grammar of German. Other work on computational parsing to be mentioned is Kettunen (1986), Lehtola (1986), and Menzel & Schröder (1998b). The following is a list of languages for which Dependency Grammar fragments exist:



The Constraint Grammar webpage<sup>5</sup> additionally lists grammars for Basque, Catalan, English, Finnish, German, Italian, Sami, and Swedish.

# **11.1 General remarks on the representational format**

### **11.1.1 Valence information, nucleus and satellites**

The central concept of Dependency Grammar is valence (see Section 1.6). The central metaphor for this is the formation of stable molecules, which is explained in chemistry with reference to electron shells. A difference between chemical compounds and linguistic structures is that chemical bonding is not directed, that is, it would not make sense to claim that oxygen is more important than hydrogen in forming water. In contrast, the verb is more important than the nominal phrases it combines with to form a complete clause. In languages like English and German, the verb determines the form of its dependents, for instance their case.

One way to depict dependencies is shown in Figure 11.1. The highest node is the verb *reads*. Its valence consists of a nominative NP (the subject) and an accusative NP (an object).

Figure 11.1: Analysis of *The child reads a book.*

This is depicted by the dependency links between the node representing the verb and the nodes representing the respective nouns. The nouns themselves require a determiner, which again is shown by the dependency links to *the* and *a*, respectively. Note that the analysis presented here corresponds to the NP analysis that is assumed in HPSG, for instance, that is, the noun selects its specifier (see Section 9.1.1). It should be noted, though, that the discussion of whether an NP or a DP analysis is appropriate also took place within the Dependency Grammar community (Hudson 1984: 90; Van Langendonck 1994; Hudson 2004). See Engel (1977) for an analysis with the N as head and Welke (2011: 31) for an analysis with the determiner as head.

The verb is the head of the clause and the nouns are called *dependents*. Alternative terms for head and dependent are *nucleus* and *satellite*, respectively.

<sup>5</sup> http://beta.visl.sdu.dk/constraint\_grammar\_languages.html, 2018-02-20.

An alternative way to depict the dependencies, which is used in the Dependency Grammar variant Word Grammar (Hudson 2007), is provided in Figure 11.2. This graph displays the grammatical functions rather than information about part of speech, but apart from this it is equivalent to the representation in Figure 11.1. The highest node in Figure 11.1 is labeled with the root arrow in Figure 11.2. Downward links are indicated by the direction of the arrows.

Figure 11.2: Alternative presentation of the analysis of *The child reads a book.*

A third way of representing the same dependencies, provided in Figure 11.3, has the tree format again. This tree results if we pull the root node in Figure 11.2 upwards.

Figure 11.3: Alternative presentation of the analysis of *The child reads a book.*

Since we have a clear visualization of the dependency relation, with the nucleus placed above the dependents, we do not need arrows to encode this information. However, some variants of Dependency Grammar – for instance Word Grammar – use mutual dependencies. For instance, some theories assume that *his* depends on *child* and *child* depends on *his* in the analysis of *his child*. If mutual dependencies have to be depicted, either arrows have to be used for all dependencies or some dependencies are represented by downward lines in hierarchical trees and others by arrows.

Of course, part of speech information could be added to Figures 11.2 and 11.3, grammatical function labels to Figure 11.1, and word order information to Figure 11.3.

The above figures depict the dependency relation that holds between a head and the respective dependents. This can be written down more formally as an n-ary rule that is similar to the phrase structure rules discussed in Chapter 2 (Gaifman 1965: 305; Hays 1964: 513; Baumgärtner 1970: 61; Heringer 1996: Section 4.1). For instance, Baumgärtner suggests the general rule format in (1):

(1) χ → χ₁ … χᵢ₋₁ ∗ᵢ χᵢ₊₁ … χₙ, where 0 < i ≤ n

The asterisk in (1) corresponds to the word of the category χ. In our example, χ would be V, the position of the '∗' would be taken by *reads*, and χ₁ and χ₃ would be N. Together with the rule in (2b) for the determiner-noun combination, the rule in (2a) would license the dependency tree in Figure 11.1.

(2) a. V → N ∗ N
    b. N → D ∗

Alternatively, several binary rules can be assumed that combine a head with its subject, direct object, or indirect object (Kahane 2009). Dependency rules will be discussed in more detail in Section 11.7.2, where dependency grammars are compared with phrase structure grammars.
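Just to make the licensing mechanism concrete, here is a minimal Python sketch of rules in the style of (2); the data structures and the rule for bare determiners are my own illustrative assumptions, not part of Baumgärtner's proposal:

```python
# A minimal sketch of dependency rules in the style of (2): each head
# category is mapped to the admissible sequences of its dependents'
# categories, with "*" marking the position of the head word itself.
RULES = {
    "V": [["N", "*", "N"]],  # (2a): V -> N * N
    "N": [["D", "*"]],       # (2b): N -> D *
    "D": [["*"]],            # assumption: determiners take no dependents
}

def licensed(cat, left_deps, right_deps):
    """Check whether a head of category cat may occur with the given
    categories of left and right dependents."""
    return left_deps + ["*"] + right_deps in RULES[cat]

# The analysis of "The child reads a book." as in Figure 11.1:
print(licensed("V", ["N"], ["N"]))  # True: reads with child and book
print(licensed("N", ["D"], []))     # True: child with the
print(licensed("D", [], []))        # True: the without dependents
print(licensed("N", [], ["D"]))     # False: the determiner must precede
```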

### **11.1.2 Adjuncts**

Another metaphor used by Tesnière is the drama metaphor. The core participants of an event are the *actants*; apart from these, there is the background, the stage, the general setting. The actants correspond to the arguments of other theories, and the stage-describing entities are called *circumstants*. These circumstants are modifiers and are usually analyzed as adjuncts in the other theories described in this book. As far as the representation of dependencies is concerned, there is not much of a difference between arguments and adjuncts in Dependency Grammar. Figure 11.4 shows the analysis of (3):

(3) The child often reads the book slowly.

Figure 11.4: Analysis of *The child often reads the book slowly.*

The dependency annotation uses a technical device suggested by Engel (1977) to depict different dependency relations: adjuncts are marked with an additional line upwards from the adjunct node (see also Eroms 2000). An alternative way to specify the argument/adjunct – or rather the actant/circumstant – distinction is of course an explicit specification of the status as argument or adjunct. So one can use explicit labels for adjuncts and arguments, as was done for grammatical functions in the preceding figures. German grammars and valence dictionaries often use the labels E and A for *Ergänzung* and *Angabe*, respectively.

### **11.1.3 Linearization**

So far, we have seen dependency graphs whose words were linearized in a certain order. The order of the dependents, however, is in principle not determined by the dependency, and therefore a Dependency Grammar has to contain additional statements that take care of the proper linearization of linguistic objects (stems, morphemes, words). Engel (2014: 50) assumes the dependency graph in Figure 11.5 for the sentences in (4).<sup>6</sup>

(4) b. Ich war gestern bei Tom.
       I was yesterday with Tom
    c. Bei Tom war ich gestern.
       with Tom was I yesterday
    d. Ich war bei Tom gestern.
       I was with Tom yesterday

Figure 11.5: Dependency graph for several orders of *ich*, *war*, *bei Tom*, and *gestern* 'I was with Tom yesterday.' according to Engel (2014: 50)

According to Engel (2014: 50), the correct order is enforced by surface syntactic rules, such as the rule stating that there is always exactly one element in the Vorfeld in declarative main clauses and that the finite verb is in second position.<sup>7,8</sup>

<sup>6</sup> Engel uses Esub for the subject and Eacc, Edat, and Egen for the objects with respective cases.

<sup>7</sup> "Die korrekte Stellung ergibt sich dann zum Teil aus oberflächensyntaktischen Regeln (zum Beispiel: im Vorfeld des Konstativsatzes steht immer genau ein Element; das finite Verb steht an zweiter Stelle) […]"

<sup>8</sup> Engel (1970: 81) provides counterexamples to the claim that there is exactly one element in the *Vorfeld*. Related examples will be discussed in Section 11.7.1.


Furthermore, there are linearization rules that concern pragmatic properties, for instance, that given information precedes new information. Another rule ensures that weak pronouns are placed in the Vorfeld or at the beginning of the Mittelfeld. This conception of linear order is problematic for both empirical and conceptual reasons, and we will turn to it again in Section 11.7.1. It should be noted here that approaches that deal with dependency alone admit discontinuous realizations of heads and their dependents. Without any further constraints, Dependency Grammars would share a problem that was already discussed on page 343 in Section 10.6.3 on Embodied Construction Grammar and in Section 10.6.4.4 with respect to Fluid Construction Grammar: one argument could interrupt another argument, as in Figure 11.6.

Figure 11.6: Unwanted analysis of *dass die Frauen Türen öffnen* 'that the women open doors'

In order to exclude such linearizations in languages in which they are impossible, it is sometimes assumed that analyses have to be projective, that is, crossing branches like those in Figure 11.6 are not allowed. This basically reintroduces the concept of constituency into the framework, since it means that all dependents of a head have to be realized close to the head unless special mechanisms for liberation are used (see for instance Section 11.5 on nonlocal dependencies).<sup>9</sup> Some authors explicitly use a phrase structure component to be able to formulate restrictions on the serialization of constituents (Gerdes & Kahane 2001, Hellwig 2003).
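Projectivity can be stated precisely as the absence of crossing dependency arcs. The following Python sketch (my own formulation, offered purely as an illustration) checks this for an analysis given as a list of head positions:

```python
def is_projective(heads):
    """heads[i] is the position of the head of word i (0-based),
    or None for the root. An analysis is projective iff no two
    dependency arcs cross."""
    arcs = [(min(i, h), max(i, h))
            for i, h in enumerate(heads) if h is not None]
    # two arcs cross iff exactly one endpoint of one arc lies
    # strictly inside the span of the other
    return not any(a < c < b < d for a, b in arcs for c, d in arcs)

# 0=dass 1=die 2=Frauen 3=Tueren 4=oeffnen; as an assumption for this
# sketch, the subjunction dass is treated as governing the finite verb.
print(is_projective([None, 2, 4, 4, 0]))  # True: die depends on Frauen
print(is_projective([None, 3, 4, 4, 0]))  # False: the unwanted analysis,
                                          # die -> Tueren crosses
                                          # Frauen -> oeffnen
```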

### **11.1.4 Semantics**

Tesnière already distinguished the participants of a verb in a way that later became common in theories of semantic roles. He suggested that the first actant is the agent, the second one a patient, and the third a benefactive (Tesnière 2015: Chapter 106).

<sup>9</sup>While this results in units that are also assumed in phrase structure grammars, there is a difference: the units have category labels in phrase structure grammars (for instance NP), which is not the case in Dependency Grammars. In Dependency Grammars, one just refers to the label of the head (for instance the N that belongs to *child* in Figure 11.4) or one refers to the head word directly (for instance, the word *child* in Figure 11.3). So there are fewer nodes in Dependency Grammar representations (but see the discussion in Section 11.7.2.3).

Given that Dependency Grammar is a lexical framework, all lexical approaches to argument linking can be adopted. However, argument linking and semantic role assignment are just a small part of the problem that has to be solved when natural language expressions are to be assigned a meaning. Issues regarding the scope of adjuncts and quantifiers have to be solved, and it is clear that dependency graphs representing dependencies without taking into account linear order are not sufficient. An unordered dependency graph assigns grammatical functions to the dependents of a head and hence is similar in many respects to an LFG f-structure.<sup>10</sup> For a sentence like (25a) on page 232, repeated here as (5), one gets the f-structure in (25b) on page 232. This f-structure contains a subject (*David*), an object (*a sandwich*), and an adjunct set with two elements (*at noon* and *yesterday*).

(5) David devoured a sandwich at noon yesterday.

This is exactly what is encoded in an unordered dependency graph. Because of this parallel, it comes as no surprise that Bröker (2003: 308) suggested using glue semantics (Dalrymple, Lamping & Saraswat 1993; Dalrymple 2001: Chapter 8) for Dependency Grammar as well. Glue semantics was already introduced in Section 7.1.5.

There are some variants of Dependency Grammar that have explicit treatments of semantics. One example is Meaning–Text Theory (Mel'čuk 1988). Word Grammar is another one (Hudson 1990: Chapter 7; 2007: Chapter 5). The notations of these theories cannot be introduced here. It should be noted though that theories like Hudson's Word Grammar are rather rigid about linear order and do not assume that all the sentences in (4) have the same dependency structure (see Section 11.5). Word Grammar is closer to phrase structure grammar and therefore can have a semantics that interacts with constituent order in the way it is known from constituent-based theories.

# **11.2 Passive**

Dependency Grammar is a lexical theory and valence is the central concept. For this reason, it is not surprising that the analysis of the passive is a lexical one. That is, it is assumed that there is a passive participle that has a different valence requirement than the active verb (Hudson 1990: Chapter 12; Eroms 2000: Section 10.3; Engel 2014: 53–54).

Our standard example in (6) is analyzed as shown in Figure 11.7 on the next page.

(6) [dass] der Weltmeister geschlagen wird
     that the world.champion beaten is

'that the world champion is (being) beaten'

This figure is an intuitive depiction of what is going on in passive constructions. A formalization would probably amount to a lexical rule for the personal passive. See Hellwig (2003: 629–630) for an explicit suggestion of a lexical rule for the analysis of the passive in English.
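To make the shape of such a rule concrete, here is a schematic Python sketch of a personal-passive lexical rule over valence lists; the representation is my own simplification, not Hellwig's actual proposal:

```python
def personal_passive(valence):
    """Schematic personal-passive lexical rule: suppress the nominative
    argument of the active verb and promote the accusative argument
    to a nominative subject."""
    assert "NP[acc]" in valence, "personal passive requires an accusative object"
    rest = [arg for arg in valence if arg not in ("NP[nom]", "NP[acc]")]
    return ["NP[nom]"] + rest

# schlag- 'beat' governs a nominative and an accusative NP;
# the participle geschlagen then governs just a nominative:
print(personal_passive(["NP[nom]", "NP[acc]"]))  # ['NP[nom]']
```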

Note that *der Weltmeister* 'the world champion' is not an argument of the passive auxiliary *wird* 'is' in Engel's analysis.

<sup>10</sup>Tim Osborne (p. c. 2015) reminds me that this is not true in all cases: for instance non-predicative prepositions are not reflected in f-structures, but of course they are present in dependency graphs.


Figure 11.7: Analysis of [*dass*] *der Weltmeister geschlagen wird* 'that the world champion is (being) beaten' parallel to the analyses provided by Engel (2014: 53–54)

This means that subject–verb agreement cannot be determined locally, and some elaborate mechanism has to be developed to ensure agreement.<sup>11</sup> Hudson (1990), Eroms (2000: Section 5.3) and Groß & Osborne (2009) assume that subjects depend on auxiliaries rather than on the main verb. This requires argument transfer of the kind common in Categorial Grammar (see Section 8.5.2) and HPSG (Hinrichs & Nakazawa 1994a). The adapted analysis that treats the subject of the participle as a subject of the auxiliary is given in Figure 11.8 on the facing page.

<sup>11</sup>This problem would get even more pressing for cases of the so-called remote passive:


Here the object of *zu reparieren*, which is the object of a verb that is embedded two levels deep, agrees with the auxiliaries *wurde* 'was' and *wurden* 'were'. However, the question of how to analyze these remote passives is open in Engel's system anyway, and the solution of this problem would probably involve the mechanism applied in HPSG: the arguments of *zu reparieren* are raised to the governing verb *versucht*, passive applies to this verb and turns the object into a subject, which is then raised by the auxiliary. This explains the agreement between the underlying object of *zu reparieren* 'to repair' and *wurde* 'was'. Hudson (1997), working in the framework of Word Grammar, suggests an analysis of verbal complementation in German that involves what he calls *generalized raising*. He assumes that both subjects and complements may be raised to the governing head. Note that such an analysis involving generalized raising would make an analysis of sentences like (ii) straightforward, since the object would depend on the same head as the subject, namely on *hat* 'has', and hence can be placed before the subject.

(ii) Gestern hat sich der Spieler verletzt.
     yesterday has self the player injured
     'The player injured himself yesterday.'

For a discussion of Groß & Osborne's account of (ii) see page 598.

Figure 11.8: Analysis of [*dass*] *der Weltmeister geschlagen wird* 'that the world champion is (being) beaten' with the subject as dependent of the auxiliary

# **11.3 Verb position**

In many Dependency Grammar publications on German, linearization issues are not dealt with and the authors just focus on the dependency relations. The dependency relations between a verb and its arguments are basically the same in verb-initial and verb-final sentences. If we compare the dependency graphs of the sentences in (7), given in Figures 11.9 and 11.10, we see that only the position of the verb is different, but the dependency relations are the same, as it should be.<sup>12</sup>

(7) b. Kennt jeder diesen Mann?
       knows everybody this man
       'Does everybody know this man?'

The correct ordering of the verb with respect to its arguments and adjuncts is ensured by linearization constraints that refer to the respective topological fields. See Section 11.1.3 and Section 11.7.1 for further details on linearization.

<sup>12</sup>Eroms (2000) uses the part of speech Pron for pronouns like *jeder* 'everybody'. If information about part of speech plays a role in selection, this necessitates a disjunctive specification of all valence frames of heads that govern nominal expressions, since they can combine either with an NP with internal structure or with a pronoun. By assigning pronouns the category N, such a disjunctive specification is avoided. A pronoun differs from a noun in its valence (it is fully saturated, while a noun needs a determiner), but not in its part of speech. Eroms & Heringer (2003: 259) use the symbol N\_pro for pronouns. If the pro-part is to be understood as a special property of items with the part of speech N, this is compatible with what I have said above: heads could then select for Ns. If N\_pro and N are assumed to be distinct atomic symbols, the problem remains.

Using N rather than Pron as part of speech for pronouns is standard in other versions of Dependency Grammar, as for instance Word Grammar (Hudson 1990: 167; Hudson 2007: 190). See also footnote 2 on page 53 on the distinction of pronouns and NPs in phrase structure grammars.


Figure 11.9: Analysis of [*dass*] *jeder diesen Mann kennt* 'that everybody knows this man'

Figure 11.10: Analysis of *Kennt jeder diesen Mann?* 'Does everybody know this man?'

# **11.4 Local reordering**

The situation regarding local reordering is the same. The dependency relations of the sentence in (8b) are shown in Figure 11.11 on the next page. The analysis of the sentence with normal order in (8a) has already been given in Figure 11.9.

(8) b. [dass] diesen Mann jeder kennt
       that this man everybody knows
       'that everybody knows this man'

Figure 11.11: Analysis of [*dass*] *diesen Mann jeder kennt* 'that everybody knows this man'

Figure 11.12: Analysis of *Diesen Mann kennt jeder.* 'This man, everybody knows.' without special treatment of fronting

# **11.5 Long-distance dependencies**

There are several possibilities to analyze nonlocal dependencies in Dependency Grammar. The easiest one is the one we have already seen in the previous sections. Many analyses just focus on the dependency relations and assume that the order with the verb in second position is just one of the possible linearization variants (Eroms 2000: Section 9.6.2; Groß & Osborne 2009). Figure 11.12 shows the analysis of (9):

(9) [Diesen Mann] kennt jeder.
    this man knows everybody
    'Everyone knows this man.'

Now, this is the simplest case, so let us look at the example in (10), which really involves a *nonlocal* dependency:


(10) Wen glaubst du, daß ich _ gesehen habe?<sup>13</sup>
     who.acc believe.2sg you.nom that I.nom seen have
     'Who do you think I saw?'

The dependency relations are depicted in Figure 11.13.

Figure 11.13: Non-projective analysis of *Wen glaubst du, dass ich gesehen habe?* 'Who do you think I saw?'

This graph differs from most graphs we have seen before in not being projective. This means that there are crossing lines: the connection between Vprt and the N for *wen* 'who' crosses the lines connecting *glaubst* 'believe' and *du* 'you' with their category symbols. Depending on the version of Dependency Grammar assumed, this is seen as a problem or it is not. Let us explore the two options: if discontinuity of the type shown in Figure 11.13 is allowed, as in Heringer's and Eroms' grammars (Heringer 1996: 261; Eroms 2000: Section 9.6.2),<sup>14</sup> there has to be something in the grammar that excludes discontinuities that are ungrammatical. For instance, an analysis of (11) as in Figure 11.14 on the next page should be excluded.

(11) * Wen glaubst ich du, dass gesehen habe?
       who.acc believe.2sg I.nom you.nom that seen have
     Intended: 'Who do you think I saw?'

Note that the order of elements in (11) is compatible with statements that refer to topological fields as suggested by Engel (2014: 50): there is a *Vorfeld* filled by *wen* 'who', there is a left sentence bracket filled by *glaubst* 'believe', and there is a *Mittelfeld* filled by *ich* 'I', *du* 'you' and the clausal argument.

<sup>13</sup>Scherpenisse (1986: 84).

<sup>14</sup>However, the authors mention the possibility of raising an extracted element to a higher node. See for instance Eroms & Heringer (2003: 260).

Figure 11.14: Unwanted dependency graph of \* *Wen glaubst ich du, dass gesehen habe?* 'Who do you think I saw?'

Having pronouns like *ich* and *du* in the *Mittelfeld* is perfectly normal. The problem is that these two pronouns come from different clauses: *du* belongs to the matrix verb *glaubst* 'believe', while *ich* 'I' depends on *gesehen habe* 'have seen'. What has to be covered by a theory is the fact that fronting and extraposition target the left-most and right-most positions of a clause, respectively. This can be modeled in constituency-based approaches in a straightforward way, as has been shown in the previous chapters.

As an alternative to discontinuous constituents, one could assume additional mechanisms that promote a dependent of an embedded head to a higher head in the structure. Such an analysis was suggested by Kunze (1968), Hudson (1997, 2000), Kahane (1997), Kahane et al. (1998), and Groß & Osborne (2009). In what follows, I use the analysis by Groß & Osborne (2009) as an example of such analyses. Groß & Osborne depict the reorganized dependencies with a dashed line, as in Figure 11.15.<sup>15,16</sup> The origin of the dependency (Vprt) is marked with a *g*, and the dependent is connected to the node to which it has risen (the topmost V) by a dashed line. Instead of realizing the accusative dependent of *gesehen* 'seen' locally, information about the missing element is transferred to a higher node and the dependent is realized there.

The analysis of Groß & Osborne (2009) is not very precise. There is a *g* and there is a dashed line, but sentences may involve multiple nonlocal dependencies.

<sup>15</sup>Eroms & Heringer (2003: 260) make a similar suggestion but do not provide any formal details.

<sup>16</sup>Note that Groß & Osborne (2009) do not assume a uniform analysis of simple and complex V2 sentences. That is, for cases that can be explained as local reordering they assume an analysis without rising. Their analysis of (9) is the one depicted in Figure 11.12. This leads to problems which will be discussed in Section 11.7.1.

Figure 11.15: Projective analysis of *Wen glaubst du, dass ich gesehen habe?* 'Who do you think I saw?' involving rising

In (12), for instance, there is a nonlocal dependency in each of the relative clauses *den wir alle begrüßt haben* 'who we all greeted have' and *die noch niemand hier gesehen hat* 'who yet nobody here seen has': the relative pronouns are fronted inside the relative clauses. The phrase *dem Mann, den wir alle begrüßt haben* 'the man who we all greeted' is the fronted dative object of *gegeben* 'given', and *die noch niemand hier gesehen hat* 'who yet nobody here seen has' is extraposed from the NP headed by *Frau* 'woman'.

(12) Dem Mann, den wir alle begrüßt haben, hat die Frau das Buch gegeben, die noch niemand hier gesehen hat.
     the man who we all greeted have has the woman the book given who yet nobody here seen has

'The woman who nobody ever saw here gave the book to the man, who all of us greeted.'

So this means that the connections (dependencies) between the head and the dislocated element have to be made explicit. This is what Hudson (1997, 2000) does in his Word Grammar analysis of nonlocal dependencies: in addition to dependencies that relate a word to its subject, object and so on, he assumes further dependencies for extracted elements. For example, *wen* 'who' in (10) – repeated here as (13) for convenience – is the object of *gesehen* 'seen' and the extractee of *glaubst* 'believe' and *dass* 'that':

(13) Wen glaubst du, dass ich gesehen habe?
     who believe you that I seen have
     'Who do you believe that I saw?'

Hudson states that the use of multiple dependencies in Word Grammar corresponds to structure sharing in HPSG (Hudson 1997: 15). Nonlocal dependencies are modeled as a series of local dependencies, as is done in GPSG and HPSG. This is important since it allows one to capture extraction path marking effects (Bouma, Malouf & Sag 2001: 1–2, Section 3.2): for instance, there are languages that use a special form of the complementizer for sentences from which an element is extracted. Figure 11.16 shows the analysis of (13) in Word Grammar.

Figure 11.16: Projective analysis of *Wen glaubst du, dass ich gesehen habe?* 'Who do you think I saw?' in Word Grammar involving multiple dependencies

The links above the words are the usual dependency links for subjects (s), objects (o), and other arguments (r is an abbreviation for *sharer*, which refers to verbal complements; l stands for *clausal complement*), and the links below the words are links for extractees (x<). The link from *gesehen* 'seen' to *wen* 'who' is special since it is both an object link and an extraction link (x<o). This link is an explicit statement corresponding to both the little *g* and the N marked by the dashed line in Figure 11.15. In addition to what is there in Figure 11.15, Figure 11.16 also has an extraction link from *dass* 'that' to *wen* 'who'. One could use the graphic representation of Engel, Eroms, and Groß & Osborne to display the Word Grammar dependencies: one would simply add dashed lines from the V node and from the Subjunction node to the N node dominating *wen* 'who'.

While this looks simple, I want to add that Word Grammar employs further principles that have to be fulfilled by well-formed structures. In the following, I explain the *No-tangling Principle*, the *No-dangling Principle*, and the *Sentence-root Principle*.

**Principle 1 (The No-tangling Principle)** *Dependency arrows must not tangle.*

**Principle 2 (The No-dangling Principle)** *Every word must have a parent.*

**Principle 3 (The Sentence-root Principle)** *In every non-compound sentence there is just one word whose parent is not a word but a contextual element.*

The No-tangling Principle ensures that there are no crossing dependency lines, that is, it ensures that structures are projective (Hudson 2000: 23). Since nonlocal dependency relations are established via the specific extractee dependencies, one wants to rule out the non-projective analysis. This principle also rules out (14b), where *green* depends on *peas* but is not adjacent to *peas*. Since *on* selects *peas*, the arrow from *on* to *peas* would cross the one from *peas* to *green*.

(14) b. * He lives green on peas.

The No-dangling Principle makes sure that there are no isolated word groups that are not connected to the main part of the structure. Without this principle (14b) could be analyzed with the isolated word *green* (Hudson 2000: 23).

The Sentence-root Principle is needed to rule out structures with more than one highest element. *glaubst* 'believe' is the root in Figure 11.16: there is no other word that dominates it and selects for it. The principle makes sure that there is no other root. It thereby rules out situations in which all elements in a phrase are roots, in which case the No-dangling Principle would lose its force, as it could be fulfilled trivially (Hudson 2000: 25).

I added this rather complicated set of principles here in order to allow a fair comparison with phrase structure-based proposals. If continuity is assumed for phrases in general, the three principles do not have to be stipulated. So, for example, LFG and HPSG do not need these three principles.
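For concreteness, the last two principles can be stated as simple checks over a set of head assignments; the No-tangling Principle corresponds to the projectivity check sketched in Section 11.1.3. The encoding below is my own (a contextual root is represented as None):

```python
def no_dangling(n, heads):
    """No-dangling: every one of the n words must have a parent,
    either another word or the contextual element (None)."""
    return all(i in heads for i in range(n))

def sentence_root(heads):
    """Sentence-root: exactly one word has the contextual element
    (rather than a word) as its parent."""
    return sum(1 for h in heads.values() if h is None) == 1

# He lives on peas: 0=He 1=lives 2=on 3=peas
heads = {0: 1, 1: None, 2: 1, 3: 2}
print(no_dangling(4, heads), sentence_root(heads))  # True True
# an isolated fifth word (cf. green in (14b)) violates No-dangling:
print(no_dangling(5, heads))                        # False
```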

Note that Hudson (1997: 16) assumes that the element in the *Vorfeld* is extracted even for simple sentences like (9). I will show in Section 11.7.1 why I think that this analysis has to be preferred over analyses assuming that simple sentences like (9) are just order variants of corresponding verb-initial or verb-final sentences.

# **11.6 New developments and theoretical variants**

This section and the following one are for advanced readers. The present section mainly deals with Tesnière's variant of Dependency Grammar. Section 11.6.1 deals with Tesnière's part of speech system and Section 11.6.2 describes the modes of combination of linguistic objects assumed by Tesnière. Reading this section is not necessary for a basic understanding of Dependency Grammar.

## **11.6.1 Tesnière's part of speech classification**

As mentioned in the introduction, Tesnière is a central figure in the history of Dependency Grammar, as it was he who developed the first formal model (Tesnière 1959, 1980, 2015). There are many versions of Dependency Grammar today and most of them use the part of speech labels that are used in other theories as well (N, P, A, V, Adv, Conj, …). Tesnière had a system of four major categories: noun, verb, adjective, and adverb. The labels for these categories were derived from the endings that are used in Esperanto, that is, they are O, I, A, and E, respectively. These categories were defined semantically, as specified in Table 11.1.<sup>17</sup>


Table 11.1: Semantically motivated part of speech classification by Tesnière

Tesnière assumed these categories to be universal and suggested that there are constraints on the ways in which these categories may depend on each other.

According to Tesnière, nouns and adverbs may depend on verbs, adjectives may depend on nouns, and adverbs may depend on adjectives or adverbs. This situation is depicted in the general dependency graph in Figure 11.17. The '∗' means that there can be an arbitrary number of dependencies between Es.

Figure 11.17: Universal configuration for dependencies according to Tesnière (I = verb, O = noun, A = adjective, E = adverb)

It is of course easy to find examples in which adjectives depend on verbs and sentences (verbs) depend on nouns. Such cases are handled via so-called *transfers* in Tesnière's system. Furthermore, conjunctions, determiners, and prepositions are missing from this set of categories. For the combination of these elements with their dependents, Tesnière used special combinatorial relations: junction and transfer. We will deal with these in the following subsection.

<sup>17</sup>As Weber (1997: 77) points out, this categorization is not without problems: in what sense is *Angst* 'fear' a substance? Why should *glauben* 'believe' be a concrete process? See also Klein (1971: Section 3.4) for discussion of *schlagen* 'to beat' and *Schlag* 'the beat' and similar cases. Even if one assumes that *Schlag* is derived from the concrete process *schlag*- by a transfer into the category O, the assumption that such Os stand for concrete substances is questionable.

## **11.6.2 Connection, junction, and transfer**

Tesnière (1959) suggested three basic relations between nodes: connection, junction, and transfer. Connection is the simple relation between a head and its dependents that we have already covered in the previous sections. Junction is a special relation that plays a role in the analysis of coordination, and transfer is a tool that allows one to change the category of a lexical item or a phrase.

#### **11.6.2.1 Junction**

Figure 11.18 illustrates the junction relation: the two conjuncts *John* and *Mary* are connected with the conjunction *and*. It is interesting to note that both of the conjuncts are connected to the head *laugh*.

Figure 11.18: Analysis of coordination using the special relation *junction*

In the case of two coordinated nouns we get dependency graphs like the one in Figure 11.19. Both nouns are connected to the dominating verb and both nouns dominate the same determiner.

Figure 11.19: Analysis of coordination using the special relation *junction*

An alternative to such a special treatment of coordination would be to treat the conjunction as the head and the conjuncts as its dependents.<sup>18</sup> The only problem with such a proposal would be the category of the conjunction.

<sup>18</sup>I did not use Tesnière's category labels here to spare the reader the work of translating I to V and O to N.

It cannot be Conj since the governing verb does not select a Conj, but an N. The trick that could be applied here is basically the same as in Categorial Grammar (see Section 21.6.2): the category of the conjunction in Categorial Grammar is (X\X)/X. We have a functor that takes two arguments of the same category, and the result of the combination is an object that has the same category as the two arguments. Translating this approach to Dependency Grammar, one would get an analysis like the one depicted in Figure 11.20 rather than the ones in Figures 11.18 and 11.19.

Figure 11.20: Analysis of coordination without *junction* and the conjunction as head

The figure for *all girls and boys* looks rather strange, since both the determiner and the two conjuncts depend on the conjunction, but since the two Ns select a Det, the same is true for the result of the coordination. In Categorial Grammar notation, the category of the conjunction would be ((NP\Det)\(NP\Det))/(NP\Det), since X is instantiated by the nouns, which have the category NP\Det in an analysis in which the noun is the head and the determiner is the dependent.
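Spelled out step by step with the forward and backward application rules from Chapter 8 (my own rendering of the combinatorics), the analysis of *all girls and boys* with the conjunction as functor proceeds as follows:

```latex
% girls, boys := NP\Det;  all := Det
% and := ((NP\Det)\(NP\Det))/(NP\Det)
\begin{align*}
\textit{and} + \textit{boys}
  &\Rightarrow (NP\backslash Det)\backslash (NP\backslash Det)
  && \text{forward application}\\
\textit{girls} + \textit{and boys}
  &\Rightarrow NP\backslash Det
  && \text{backward application}\\
\textit{all} + \textit{girls and boys}
  &\Rightarrow NP
  && \text{backward application}
\end{align*}
```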

Note that both approaches have to come up with an explanation of subject–verb agreement. Tesnière's original analysis assumes two dependencies between the verb and the individual conjuncts.<sup>19</sup> As the conjuncts are singular and the verb is plural, agreement cannot be modeled in tandem with dependency relations in this approach. If the second analysis finds ways of specifying the agreement properties of the coordination in the conjunction, the agreement facts can be accounted for without problems.

The alternative to a headed approach as depicted in Figure 11.20 is an unheaded one. Several authors working in phrase structure-based frameworks have suggested analyses of coordination without a head. Such analyses are also assumed in Dependency Grammar (Hudson 1988, Kahane 1997). Hudson (1988) and others with similar approaches assume a phrase structure component for coordination: the two nouns and the conjunction are combined to form a larger object which has properties that do not correspond to the properties of any of the combined words.

Similarly, the junction-based analysis of coordination poses problems for the interpretation of the representations.

<sup>19</sup>Eroms (2000: 467) notes the agreement problem and describes the facts. In his analysis, he connects the first conjunct to the governing head, although it seems to be more appropriate to assume an internally structured coordination and then connect its highest conjunction.


If semantic role assignment happens in parallel to dependency relations, there would be a problem with graphs like the one in Figure 11.18, since the semantic role of *laugh* cannot be filled by *John* and *Mary* simultaneously. Rather, it is filled by one entity, namely the one that refers to the set containing John and Mary. This semantic representation would belong to the phrase *John and Mary*, and the natural candidate for being the topmost entity in this coordination is *and*, as it embeds the meaning of *John* and the meaning of *Mary*: *and*′(*John*′, *Mary*′).

Such junctions are also assumed for the coordination of verbs. This is, however, not without problems, since adjuncts can have scope over the conjunct that is closest to them or over the whole coordination. An example is the following sentence from Levine (2003: 217):

(15) Robin came in, found a chair, sat down, and whipped off her logging boots in exactly thirty seconds flat.

The adjunct *in exactly thirty seconds flat* can refer either to *whipped off her logging boots*, as in (16a), or scope over all of the conjuncts together, as in (16b):

(16) b. Robin [[came in, found a chair, sat down, and pulled off her logging boots] in exactly thirty seconds flat].

The Tesnièreian analysis in Figure 11.21 corresponds to (17), while an analysis that treats the conjunction as the head as in Figure 11.22 on the next page corresponds to (16b).

(17) Robin came in in exactly thirty seconds flat and Robin found a chair in exactly thirty seconds flat and Robin sat down in exactly thirty seconds flat and Robin pulled off her logging boots in exactly thirty seconds flat.

The reading in (17) results when the adjunct refers to each conjunct individually rather than referring to a cumulative event that is expressed by a verb phrase, as in (16b).

Figure 11.21: Analysis of verb coordination involving the junction relation

Figure 11.22: Analysis of verb coordination involving the connection relation

Levine (2003: 217) discusses these sentences in connection with the HPSG analysis of extraction by Bouma, Malouf & Sag (2001). Bouma, Malouf & Sag suggest an analysis in which adjuncts are introduced lexically as dependents of a certain head. Since adjuncts are introduced lexically, the coordination structures basically have the same structure as the ones assumed in a Tesnièreian analysis. It may be possible to come up with a way to get the semantic composition right even though the syntax does not correspond to the semantic dependencies (see Chaves 2009 for suggestions), but it is clearly simpler to derive the semantics from a syntactic structure that corresponds to what is going on in semantics.

#### **11.6.2.2 Transfer**

Transfers are used in Tesnière's system for the combination of words or phrases of one of the major categories (for instance nouns) with words of minor categories (for instance prepositions). In addition, transfers can shift a word or phrase into another category without any other word participating.

Figure 11.23 shows an example of a transfer. The preposition *in* causes a category change: while *Traumboot* 'dream boat' is an O (noun), the combination of the preposition and the noun is an E. The example shows that Tesnière used the grammatical category to encode grammatical functions. In theories like HPSG there is a clear distinction: there is information about part of speech on the one hand and the function of elements as modifiers and predicates on the other hand. The modifier function is encoded by the selectional feature mod, which is independent of the part of speech. It is therefore possible to have modifying and non-modifying adjectives, modifying and non-modifying prepositional phrases, modifying and non-modifying noun phrases and so on. For the example at hand, one would assume a preposition with directional semantics that selects for an NP. The preposition is the head of a PP with a filled mod value.


Figure 11.23: Transfer with an example adapted from Weber (1997: 83)

Another area in which transfer is used is morphology. For instance, the derivation of French *frappant* 'striking' by suffixation of -*ant* to the verb stem *frapp*- is shown in Figure 11.24.

Figure 11.24: Transfer in morphology and its reconceptualization as normal dependency

Such transfers can be subsumed under the general connection relation if the affix is treated as the head. Morphologists working in realizational morphology and construction morphology argue against such morpheme-based analyses, since they involve a lot of empty elements for conversions, as for instance the conversion of the verb *play* into the noun *play* (see Figure 11.25). Consequently, lexical rules are assumed for derivations and conversions in theories like HPSG. HPSG lexical rules are basically equivalent to unary branching rules (see the discussion of (41) on page 292 and Section 19.5). The affixes are integrated into the lexical rules or into realization functions that specify the morphological form of the item that is licensed by the lexical rule.

Figure 11.25: Conversion as transfer from I (verb) to O (substantive) and as dependency with an empty element of the category N as head

Concluding it can be said that transfer corresponds to


For further discussion of the relation between Tesnière's transfer rules and constituency rules see Kahane & Osborne (2015: Section 4.9.1–4.9.2). Kahane & Osborne point out that transfer rules can be used to model exocentric constructions, that is, constructions in which there is no single part that could be identified as the head. For more on headless constructions see Section 11.7.2.4.

### **11.6.3 Scope**

As Kahane & Osborne (2015: lix) point out, Tesnière uses so-called polygraphs to represent scopal relations. So, since *that you saw yesterday* in (18) refers to *red cars* rather than *cars* alone, this is represented by a line that starts at the connection between *red* and *cars* rather than on one of the individual elements (Tesnière 2015: 150, Stemma 149).

(18) red cars that you saw yesterday

Tesnière's analysis is depicted in the left representation in Figure 11.26. It is worth noting that this representation corresponds to the phrase structure tree on the right of Figure 11.26. The combination B between *red* and *cars* corresponds to the B node in the right-hand figure and the combination A of *red cars* and *that you saw yesterday* corresponds to the A node. So, what is made explicit and is assigned a name in phrase structure grammars remains nameless in Tesnière's analysis, but due to the assumption of polygraphs, it is possible to refer to the combinations. See also the discussion of Figure 11.46, which shows additional nodes that Hudson assumes in order to model semantic relations.

Figure 11.26: Tesnière's way of representing scope and the comparison with phrase structure-based analyses by Kahane & Osborne (2015: lix)

## **11.7 Summary and classification**

This section is for advanced readers. It compares Dependency Grammar to phrase structure grammars.

Proponents of Dependency Grammar emphasize the point that Dependency Grammar is much simpler than phrase structure grammars, since there are fewer nodes and the general concept is easier to grasp (see for instance Osborne 2014: Section 3.2, Section 7). This is indeed true: Dependency Grammar is well-suited for teaching grammar in introductory classes. However, as Sternefeld & Richter (2012: 285) point out in a rather general discussion, simple syntax comes at the price of complex semantics and vice versa. So, in addition to the dependency structure that is described in Dependency Syntax, one needs other levels: one level is the level of semantics and another one is linearization. As far as linearization is concerned, Dependency Grammar has two options: assuming continuous constituents, that is, projective structures, or allowing for discontinuous constituents. These options will be discussed in the following subsections. Section 11.7.2 compares Dependency Grammars with phrase structure grammars and shows that projective Dependency Grammars can be translated into phrase structure grammars. It also shows that non-projective structures can be modeled in theories like HPSG. The integration of semantics is discussed in Section 11.7.2.3 and it will become clear that once other levels are taken into account, Dependency Grammars are not necessarily simpler than phrase structure grammars.

### **11.7.1 Linearization**

We have seen several approaches to linearization in this chapter. Many just assume a dependency graph and some linearization according to the topological fields model. As has been argued in Section 11.5, allowing discontinuous serialization of a head and its dependents opens up Pandora's box. I have discussed the analysis of nonlocal dependencies by Kunze (1968), Hudson (1997, 2000), Kahane, Nasr & Rambow (1998), and Groß & Osborne (2009). With the exception of Hudson, those authors assume that dependents of a head rise to a dominating head only in those cases in which a discontinuity would arise otherwise. However, there seems to be a reason to assume that fronting should be treated by special mechanisms even in cases that allow for continuous serialization. For instance, the ambiguity or lack of ambiguity of the examples in (19) cannot be explained in a straightforward way:<sup>20</sup>

(19) a. Oft liest er das Buch nicht.
        often reads he the book not
        'He does not read the book often.'
     b. dass er das Buch nicht oft liest
        that he the book not often reads
        'It is not the case that he reads the book often.'
     c. dass er das Buch oft nicht liest
        that he the book often not reads
        'It is often that he does not read the book.'

The point about the three examples is that only (19a) is ambiguous. Even though (19c) has the same order as far as *oft* 'often' and *nicht* 'not' are concerned, the sentence is not ambiguous. So it is the fronting of an adjunct that is the reason for the ambiguity. The dependency graph for (19a) is shown in Figure 11.27. Of course the dependencies for (19b) and (19c) do not differ. The graphs would be the same, only differing in serialization. Therefore, differences in scope could not be derived from the dependencies and complicated statements like (20) would be necessary:

Figure 11.27: Dependency graph for *Oft liest er das Buch nicht.* 'He does not read the book often.'

(20) If a dependent is linearized in the *Vorfeld* it can scope both over and under all other adjuncts of the head it is a dependent of.

Eroms (1985: 320) proposes an analysis of negation in which the negation is treated as the head; that is, the sentence in (21) has the structure in Figure 11.28.<sup>21</sup>

<sup>20</sup>See Lötscher (1985: 208–209) for a discussion of similar examples making the same point.

<sup>21</sup>But see Eroms (2000: Section 11.2.3).


Figure 11.28: Analysis of negation according to Eroms (1985: 320)

(21) Er kommt nicht.
     he comes not
     'He does not come.'

This analysis is equivalent to analyses in the Minimalist Program assuming a NegP and it has the same problem: the category of the whole object is Adv, but it should be V. This is a problem since higher predicates may select for a V rather than an Adv.<sup>22</sup>

The same is true for constituent negation or other scope-bearing elements. For example, the analysis of (22) would have to be the one in Figure 11.29.

(22) der angebliche Mörder
     the alleged murderer

Figure 11.29: Analysis that would result if one considered all scope-bearing adjuncts to be heads

This structure would have the additional problem of being non-projective. Eroms does treat the determiner differently from what is assumed here, so this type of non-projectivity may not be a problem for him. However, the head analysis of negation would result in non-projectivity in so-called coherent constructions in German. The sentence in (23) has two readings: in the first reading, the negation scopes over *singen* 'sing' and in the second one over *singen darf* 'sing may'.

<sup>22</sup>See for instance the analysis of embedded sentences like (23) below.

(23) dass er nicht singen darf
     that he not sing may
     'that he is not allowed to sing' or 'that he is allowed not to sing'

The reading in which *nicht* 'not' scopes over the whole verbal complex would result in the non-projective structure that is given in Figure 11.30. Eroms also considers an analysis in which the negation is a word part ('Wortteiläquivalent'). This does, however, not help here since, first, the negation and the verb are not adjacent in V2 contexts like (19a) and, second, even in verb-final contexts like (23), Eroms would have to assume that the object to which the negation attaches is the whole verbal complex *singen darf* 'sing may', that is, a complex object consisting of two words.

Figure 11.30: Analysis that results if one assumes the negation to be a head

This leaves us with the analysis provided in Figure 11.27 and hence with a problem since we have one structure with two possible adjunct realizations that correspond to different readings. This is not predicted by an analysis that treats the two possible linearizations simply as alternative orderings.

Thomas Groß (p. c. 2013) suggested an analysis in which *oft* does not depend on the verb but on the negation. This corresponds to constituent negation in phrase structure approaches. The dependency graph is shown on the left-hand side in Figure 11.31. The figure on the right-hand side shows the graph for the corresponding verb-final sentence. The reading corresponding to constituent negation can be illustrated with contrastive expressions. While in (24a) it is only *oft* 'often' which is negated, it is *oft gelesen* 'often read' that is in the scope of negation in (24b).

(24) a. Er hat das Buch nicht oft gelesen, sondern selten.
        he has the book not often read but seldom
        'He did not read the book often, but rather seldom.'
     b. Er hat das Buch nicht oft gelesen, sondern selten gekauft.
        he has the book not often read but seldom bought
        'He did not read the book often but rather bought it seldom.'

Figure 11.31: Dependency graph for *Oft liest er das Buch nicht.* 'He does not read the book often.' according to Groß and verb-final variant

Figure 11.32: Possible syntactic analyses for *er das Buch nicht oft liest* 'he does not read the book often'

These two readings correspond to the two phrase structure trees in Figure 11.32. Note that in an HPSG analysis, the adverb *oft* would be the head of the phrase *nicht oft* 'not often'. This is different from the Dependency Grammar analysis suggested by Groß.


Furthermore, the Dependency Grammar analysis has two structures: a flat one with all adverbs depending on the same verb and one in which *oft* depends on the negation. The phrase structure-based analysis has three structures: one with the order *oft* before *nicht*, one with the order *nicht* before *oft* and the one with direct combination of *nicht* and *oft*. The point about the example in (19a) is that one of the first two structures is missing in the Dependency Grammar representations. This probably does not make it impossible to derive the semantics, but it is more difficult than it is in constituent-based approaches.

Furthermore, note that models that directly relate dependency graphs to topological fields will not be able to account for sentences like (25).

(25) Dem Saft eine kräftige Farbe geben Blutorangen.<sup>23</sup>
     the juice a strong color give blood.oranges
     'Blood oranges give a strong color to the juice.'

The dependency graph of this sentence is given in Figure 11.33.

Figure 11.33: Dependency graph for *Dem Saft eine kräftige Farbe geben Blutorangen.* 'Blood oranges give the juice a strong color.'

Such apparent multiple frontings are not restricted to NPs. Various types of dependents can be placed in the *Vorfeld*. An extensive discussion of the data is provided in Müller (2003a). Additional data have been collected in a research project on multiple frontings and information structure (Bildhauer 2011). Any theory based on dependencies alone and not allowing for empty elements is forced to give up the restriction commonly assumed in the analysis of V2 languages, namely that the verb is in second position. In comparison, analyses like GB and those HPSG variants that assume an empty verbal head can assume that a projection of such a verbal head occupies the *Vorfeld*. This explains why the material in the *Vorfeld* behaves like a verbal projection containing a visible verb: such *Vorfelds* are internally structured topologically. They may have a filled *Nachfeld* and even a particle that fills the right sentence bracket. See Müller (2005c, 2023a) for further data, discussion, and a detailed analysis.

<sup>23</sup>Bildhauer & Cook (2010) found this example in the *Deutsches Referenzkorpus* (DeReKo), hosted at Institut für Deutsche Sprache, Mannheim: http://www.ids-mannheim.de/kl/projekte/korpora, 2018-02-20.


Figure 11.34: Dependency graph for *Dem Saft eine kräftige Farbe geben Blutorangen.* 'Blood oranges give the juice a strong color.' with an empty verbal head for the *Vorfeld*

The equivalent of the analysis in Groß & Osborne's framework (2009) would be something like the graph that is shown in Figure 11.34, but note that Groß & Osborne (2009: 73) explicitly reject empty elements, and in any case an empty element which is stipulated just to get the multiple fronting cases right would be entirely ad hoc.<sup>24</sup> It is important to note that the issue is not solved by simply dropping the V2 constraint and allowing dependents of the finite verb to be realized to its left, since the fronted constituents do not necessarily depend on the finite verb, as the examples in (26) show:

(26) b. [Kurz] [die Bestzeit] hatte der Berliner Andreas Klöden […] gehalten.<sup>26</sup>
        briefly the best.time had the Berliner Andreas Klöden held
        'Andreas Klöden from Berlin had briefly held the record time.'

And although the respective structures are marked, such multiple frontings can even cross clause boundaries:

<sup>24</sup>I stipulated such an empty element in a linearization-based variant of HPSG allowing for discontinuous constituents (Müller 2002b), but later modified this analysis so that only continuous constituents are allowed, verb position is treated as head-movement and multiple frontings involve the same empty verbal head as is used in the verb movement analysis (Müller 2005c, 2023a).

<sup>25</sup>taz, 07.07.1999, p. 18. Quoted from Müller (2002b).

<sup>26</sup>Märkische Oderzeitung, 28./29.07.2001, p. 28.

(27) Der Maria einen Ring glaube ich nicht, daß er je schenken wird.<sup>27</sup>
     the.dat Maria a.acc ring believe I not that he ever give will
     'I don't think that he would ever give Maria a ring.'

If such dependencies are permitted it is really difficult to constrain them. The details cannot be discussed here, but the reader is referred to Müller (2005c, 2023a).

Note also that Engel's statement (2014: 50) regarding the linear order in German sentences, which refers to one element in front of the finite verb (see footnote 7), is very imprecise. One can only guess what is intended by the word *element*. One interpretation is that it is a continuous constituent in the classical sense of constituency-based grammars. An alternative would be that there is a continuous realization of a head and some but not necessarily all of its dependents. This alternative would allow an analysis of (28) with extraposition and discontinuous constituents, as depicted in Figure 11.35.

(28) Ein junger Kerl stand da, mit langen blonden Haaren, die sein Gesicht einrahmten, […]<sup>28</sup>
     a young guy stood there with long blond hair that his face framed
     'A young guy was standing there with long blond hair that framed his face.'

Figure 11.35: Dependency graph for *Ein junger Kerl stand da, mit langen blonden Haaren.* 'A young guy was standing there with long blond hair.' with a discontinuous constituent in the *Vorfeld*

<sup>27</sup>Fanselow (1993: 67).

<sup>28</sup>Charles Bukowski, *Der Mann mit der Ledertasche*. München: Deutscher Taschenbuch Verlag, 1994, p. 201, translation by Hans Hermann.

A formalization of such an analysis is not trivial, since one has to be precise about what exactly can be realized discontinuously and which parts of a dependency must be realized continuously. Kathol & Pollard (1995) developed such an analysis of extraposition in the framework of HPSG. See also Müller (1999b: Section 13.3). I discuss the basic mechanisms for such linearization analyses in HPSG in the following section.

### **11.7.2 Dependency Grammar vs. phrase structure grammar**

This section deals with the relation between Dependency Grammars and phrase structure grammars. I first show that projective Dependency Grammars can be translated into phrase structure grammars (Section 11.7.2.1). I will then deal with non-projective DGs and show how they can be captured in linearization-based HPSG (Section 11.7.2.2). Section 11.7.2.3 argues for the additional nodes that are assumed in phrase structure-based theories and Section 11.7.2.4 discusses headless constructions, which pose a problem for all Dependency Grammar accounts.

#### **11.7.2.1 Translating projective Dependency Grammars into phrase structure grammars**

As noted by Gaifman (1965), Covington (1990: 234), Oliva (2003) and Hellwig (2006: 1093), certain projective headed phrase structure grammars can be turned into Dependency Grammars by moving the head one level up to replace the dominating node. So in an NP structure, the N is shifted into the position of the NP and all other connections remain the same. Figure 11.36 illustrates this. Of course this procedure cannot be applied to all phrase structure grammars directly since some involve more elaborate structure. For instance, the rule S → NP, VP cannot be translated into a dependency rule, since NP and VP are both complex categories.

Figure 11.36: *a book* in a phrase structure and a Dependency Grammar analysis

In what follows, I want to show how the dependency graph in Figure 11.1 can be recast as headed phrase structure rules that license a similar tree, namely the one in Figure 11.37. I did not use the labels NP and VP to keep the two figures maximally similar. The P part of NP and VP refers to the saturation of a projection and is often ignored in figures. See Chapter 9 on HPSG, for example. The grammar that licenses the tree is given in (29), again ignoring valence information.

Figure 11.37: Analysis of *The child reads a book.* in a phrase structure with flat rules

(29) V → N V N
     N → D N
     V → reads
     N → child
     N → book
     D → the
     D → a
If one replaces the N and V in the right-hand side of the first two rules in (29) with the respective lexical items and then removes the rules that license the words, one arrives at the lexicalized variant of the grammar given in (30):

(30) V → N reads N
     N → D child
     N → D book
     D → the
     D → a
*Lexicalized* means that every partial tree licensed by a grammar rule contains a lexical element. The grammar in (30) licenses exactly the tree in Figure 11.1.<sup>29</sup>
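The translation just sketched can be made mechanical. The following Python snippet is a minimal sketch of my own (the tuple encoding of dependency trees and all names are illustrative, not part of the theories discussed): it walks a projective dependency tree and emits one flat, lexicalized rule per head, keeping the head word inside the rule, in the spirit of (30):

```python
def to_rules(node):
    """Turn a projective dependency tree into flat lexicalized
    phrase structure rules; the head word stays inside its rule."""
    cat, word, left, right = node
    if not left and not right:
        return [f"{cat} -> {word}"]          # a leaf just licenses its word
    rhs = [d[0] for d in left] + [word] + [d[0] for d in right]
    rules = [f"{cat} -> {' '.join(rhs)}"]
    for dep in left + right:
        rules += to_rules(dep)
    return rules

# A dependency tree for 'The child reads a book.' (cf. Figure 11.1):
the   = ("D", "the",   [], [])
a     = ("D", "a",     [], [])
child = ("N", "child", [the], [])
book  = ("N", "book",  [a],   [])
reads = ("V", "reads", [child], [book])

print("\n".join(to_rules(reads)))
# V -> N reads N
# N -> D child
# D -> the
# N -> D book
# D -> a
```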

One important difference between classical phrase structure grammars and Dependency Grammars is that the phrase structure rules impose a certain order on the daughters. That is, the V rule in (30) implies that the first nominal projection, the verb, and the second nominal projection have to appear in the order stated in the rule. Of course this ordering constraint can be relaxed as it is done in GPSG. This would basically permit any order of the daughters on the right-hand side of rules. This leaves us with the integration of adjuncts. Since adjuncts depend on the head as well (see Figure 11.4), a rule could be assumed that allows arbitrarily many adjuncts in addition to the arguments. So the V rule in (30) would be changed to the one in (31):<sup>30</sup>

<sup>29</sup>As mentioned on page 373, Gaifman (1965: 305), Hays (1964: 513), Baumgärtner (1970: 57) and Heringer (1996: 37) suggest a general rule format for dependency rules that has a special marker ('\*' and '~', respectively) in place of the lexical words in (30). Heringer's rules have the form in (i):

(i) X[Y1, Y2, ~, Y3]

X is the category of the head, Y1, Y2, and Y3 are dependents of the head and '~' is the position into which the head is inserted.

<sup>30</sup>See page 192 for a similar rule in GPSG and see Kasper (1994) for an HPSG analysis of German that assumes entirely flat structures and integrates an arbitrary number of adjuncts. Dahl (1980) argues that one needs "higher nodes" (N nodes and VP nodes in other terminology) for adjunct attachment for semantic reasons. I think this is not correct since – as Kasper showed – relational constraints could be used to determine complex semantic representations. I agree though that assuming these nodes makes things a lot easier. See also footnote 37.


(31) V → N reads N Adv\*

Such generalized phrase structures would give us the equivalent of projective Dependency Grammars.<sup>31</sup> However, as we have seen, some researchers allow for crossing edges, that is, for discontinuous constituents. In what follows, I show how such Dependency Grammars can be formalized in HPSG.

#### **11.7.2.2 Non-projective Dependency Grammars and phrase structure grammars with discontinuous constituents**

The equivalent to non-projective dependency graphs are discontinuous constituents in phrase structure grammars. In what follows I want to provide one example of a phrase structure-based theory that permits discontinuous structures. Since, as I will show, discontinuities can be modeled as well, the difference between phrase structure grammars and Dependency Grammars boils down to the question of whether units of words are given a label (for instance NP) or not.

The technique that is used to model discontinuous constituents in frameworks like HPSG goes back to Mike Reape's work on German (1991, 1992, 1994). Reape uses a list called domain to represent the daughters of a sign in the order in which they appear at the surface of an utterance. (32) shows an example in which the dom value of a headed phrase is computed from the dom value of the head and the list of non-head daughters.

$$\text{(32)}\quad \textit{headed-phrase} \Rightarrow \begin{bmatrix} \text{HEAD-DTR}|\text{DOM} & \boxed{1} \\ \text{NON-HEAD-DTRS} & \boxed{2} \\ \text{DOM} & \boxed{1} \;\bigcirc\; \boxed{2} \end{bmatrix}$$

The symbol '⃝' stands for the *shuffle* relation. *shuffle* relates three lists A, B and C iff C contains all elements from A and B and the order of the elements in A and the order of the elements of B is preserved in C. (33) shows the combination of two lists with two elements each:

(33) ⟨ *a, b* ⟩ ⃝ ⟨ *c, d* ⟩ = ⟨ *a, b, c, d* ⟩ ∨ ⟨ *a, c, b, d* ⟩ ∨ ⟨ *a, c, d, b* ⟩ ∨ ⟨ *c, a, b, d* ⟩ ∨ ⟨ *c, a, d, b* ⟩ ∨ ⟨ *c, d, a, b* ⟩

The result is a disjunction of six lists. *a* is ordered before *b* and *c* before *d* in all of these lists, since this is also the case in the two lists ⟨ *a, b* ⟩ and ⟨ *c, d* ⟩ that have been combined. But apart from this, *b* can be placed before, between or after *c* and *d*.
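The shuffle relation is easy to state procedurally. The following Python sketch (mine, purely for illustration) enumerates all shuffles of two lists; applied to ⟨ *a, b* ⟩ and ⟨ *c, d* ⟩ it yields exactly the six orders in (33):

```python
def shuffle(a, b):
    """Enumerate all interleavings of a and b that preserve the
    relative order of the elements within each input list."""
    if not a:
        yield list(b)
    elif not b:
        yield list(a)
    else:
        for rest in shuffle(a[1:], b):
            yield [a[0]] + rest
        for rest in shuffle(a, b[1:]):
            yield [b[0]] + rest

for order in shuffle(['a', 'b'], ['c', 'd']):
    print(order)        # the six lists of (33)
```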

<sup>31</sup>Sylvain Kahane (p. c. 2015) states that binarity is important for Dependency Grammars, since there is one rule for the subject, one for the object and so on (as for instance in Kahane 2009, which is an implementation of Dependency Grammar in the HPSG formalism). However, I do not see any reason to disallow for flat structures. For instance, Ginzburg & Sag (2000: 364) assumed a flat rule for subject auxiliary inversion in HPSG. In such a flat rule the specifier/subject and the other complements are combined with the verb in one go. This would also work for more than two valence features that correspond to grammatical functions like subject, direct object, indirect object. See also footnote 29 on flat rules.

Every word comes with a domain value that is a list that contains the word itself:

(34) Domain contribution of single words, here *gibt* 'gives':

$$\boxed{1}\begin{bmatrix} \text{PHON} & \left\langle \textit{gibt} \right\rangle \\ \text{SYNSEM} & \dots \\ \text{DOM} & \left\langle \boxed{1} \right\rangle \end{bmatrix}$$

The description in (34) may seem strange at first glance, since it is cyclic, but it can be understood as a statement saying that *gibt* contributes itself to the items that occur in linearization domains.

The constraint in (35) is responsible for the determination of the phon values of phrases:

$$\text{(35)}\quad \textit{phrase} \Rightarrow \begin{bmatrix} \text{PHON} & \boxed{1} \oplus \dots \oplus \boxed{n} \\ \text{DOM} & \left\langle \begin{bmatrix} \textit{sign} \\ \text{PHON} & \boxed{1} \end{bmatrix}, \dots, \begin{bmatrix} \textit{sign} \\ \text{PHON} & \boxed{n} \end{bmatrix} \right\rangle \end{bmatrix}$$

It states that the phon value of a sign is the concatenation of the phon values of its domain elements. Since the order of the domain elements corresponds to their surface order, this is the obvious way to determine the phon value of the whole linguistic object.

Figure 11.38 shows how this machinery can be used to license binary branching structures with discontinuous constituents. Words or word sequences that are separated by commas stand for separate domain objects, that is, ⟨ *das, Buch* ⟩ contains the two objects *das* and *Buch* and ⟨ *das Buch, gibt* ⟩ contains the two objects *das Buch* and *gibt*. The important point to note here is that the arguments are combined with the head in the order accusative, dative, nominative, although the elements in the constituent order domain are realized in the order dative, nominative, accusative rather than nominative, dative, accusative, as one would expect. This is possible since the formulation of the computation of the dom value using the shuffle operator allows for discontinuous constituents. The node for *der Frau das Buch gibt* 'the woman the book gives' is discontinuous: *ein Mann* 'a man' is inserted into the domain between *der Frau* 'the woman' and *das Buch* 'the book'. This is more obvious in Figure 11.39, which has a serialization of NPs that corresponds to their order.

Figure 11.38: Analysis of *dass der Frau ein Mann das Buch gibt* 'that a man gives the woman the book' with binary branching structures and discontinuous constituents

Figure 11.39: Analysis of *dass der Frau ein Mann das Buch gibt* 'that a man gives the woman the book' with binary branching structures and discontinuous constituents, showing the discontinuity

Such binary branching structures were assumed for the analysis of German by Kathol (1995, 2000) and Müller (1995, 1996c, 1999b, 2002a), but as we have seen throughout this chapter, Dependency Grammar assumes flat representations (but see Footnote 31 on page 404). Schema 1 licenses structures in which all arguments of a head are realized in one go.<sup>32</sup>

### **Schema 1 (Head-Complement Schema (flat structure))**

```
head-complement-phrase ⇒
  [ synsem|loc|cat|comps           ⟨⟩
    head-dtr|synsem|loc|cat|comps  1
    non-head-dtrs                  1 ]
```
To keep the presentation simple, I assume that the comps list contains descriptions of complete signs. Therefore the whole list can be identified with the list of non-head daughters.<sup>33</sup>

<sup>32</sup>I assume here that all arguments are contained in the comps list of a lexical head, but nothing hinges on that. One could also assume several valence features and nevertheless get a flat structure. For instance, Borsley (1989: 339) suggests a schema for auxiliary inversion in English and verb-initial sentences in Welsh that refers to both the valence feature for subjects and for complements and realizes all elements in a flat structure.

The computation of the dom value can be constrained in the following way:

$$\text{(36)}\quad \textit{headed-phrase} \Rightarrow \begin{bmatrix} \text{HEAD-DTR} & \boxed{1} \\ \text{NON-HEAD-DTRS} & \left\langle \boxed{2}, \dots, \boxed{n} \right\rangle \\ \text{DOM} & \left\langle \boxed{1} \right\rangle \bigcirc \left\langle \boxed{2} \right\rangle \bigcirc \dots \bigcirc \left\langle \boxed{n} \right\rangle \end{bmatrix}$$

This constraint says that the value of dom is a list which is the result of shuffling singleton lists each containing one daughter as elements. The result of such a shuffle operation is a disjunction of all possible permutations of the daughters. This seems to be overkill for something that GPSG already gained by abstracting away from the order of the elements on the right hand side of a phrase structure rule. Note, however, that this machinery can be used to reach even freer orders: by referring to the dom values of the daughters rather than the daughters themselves, it is possible to insert individual words into the dom list.

$$\text{(37)}\quad \textit{headed-phrase} \Rightarrow \begin{bmatrix} \text{HEAD-DTR}|\text{DOM} & \boxed{1} \\ \text{NON-HEAD-DTRS} & \left\langle \begin{bmatrix} \text{DOM} & \boxed{2} \end{bmatrix}, \dots, \begin{bmatrix} \text{DOM} & \boxed{n} \end{bmatrix} \right\rangle \\ \text{DOM} & \boxed{1} \bigcirc \boxed{2} \bigcirc \dots \bigcirc \boxed{n} \end{bmatrix}$$

Using this constraint we have dom values that basically contain all the words in an utterance in any permutation. What we are left with is a pure Dependency Grammar without any constraints on projectivity. With such a grammar we could analyze the non-projective structure of Figure 11.6 on page 376 and much more. The analysis in terms of domain union is shown in Figure 11.40. It is clear that such discontinuity is unwanted and hence one has to have restrictions that enforce continuity. One possible restriction is to require projectivity and hence equivalence to phrase structure grammars in the sense that was discussed above.

Figure 11.40: Unwanted analysis of *dass die Frauen Türen öffnen* 'that the women open doors' using Reape-style constituent order domains
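The effect of constraint (36) can be verified with a few lines of Python (again a sketch of mine, restating the shuffle function given above): shuffling singleton lists, one per daughter, yields every permutation of the daughters.

```python
from itertools import permutations

def shuffle(a, b):
    if not a:
        yield list(b)
    elif not b:
        yield list(a)
    else:
        yield from ([a[0]] + r for r in shuffle(a[1:], b))
        yield from ([b[0]] + r for r in shuffle(a, b[1:]))

def shuffle_all(lists):
    """Shuffle a sequence of lists together, as in constraint (36)."""
    results = [[]]
    for l in lists:
        results = [out for r in results for out in shuffle(r, l)]
    return results

daughters = ["der Frau", "ein Mann", "das Buch", "gibt"]
doms = shuffle_all([[d] for d in daughters])
print(len(doms))                                                     # 24 = 4!
print(sorted(map(tuple, doms)) == sorted(permutations(daughters)))   # True
```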

<sup>33</sup>Without this assumption one would need a relational constraint that maps a list with descriptions of type *synsem* onto a list with descriptions of type *sign*. See Meurers (1999c: 198) for details.


There is some dispute about the question of whether constituency or dependency is primary or necessary for the analysis of natural language: Hudson (1980) and Engel (1996) claim that dependency is sufficient, a claim that, according to Engel (1996), is shared by dependency grammarians in general, while Leiss (2003) claims that it is not. In order to settle the issue, let us take a look at some examples:

(38) Dass Peter kommt, klärt nicht, ob Klaus spielt.
     that Peter comes resolves not whether Klaus plays
     'That Peter comes does not resolve the question of whether Klaus will play.'

If we know the meaning of the utterance, we can assign a dependency graph to it. Let us assume that the meaning of (38) is something like (39):

(39) ¬ *resolve*′ (*that*′ (*come*′ (*Peter*′ )),*whether*′ (*play*′ (*Klaus*′ )))

With this semantic information, we can of course construct a dependency graph for (38). The reason is that the dependency relation is reflected in a bi-unique way in the semantic representation in (39). The respective graph is given in Figure 11.41. But note that this does not hold in the general case. Take for instance the example in (40):

Figure 11.41: The dependency graph of *Dass Peter kommt, klärt nicht, ob Klaus spielt.* 'That Peter comes does not resolve the question of whether Klaus plays.' can be derived from the semantic representation.

(40) Dass Peter kommt, klärt nicht, ob Klaus kommt.
     that Peter comes resolves not whether Klaus comes
     'That Peter comes does not resolve the question of whether Klaus comes.'

Here the word *kommt* appears twice. Without any notion of constituency or restrictions regarding adjacency, linear order and continuity, we cannot assign a dependency graph unambiguously. For instance, the graph in Figure 11.42 is perfectly compatible with the meaning of this sentence: *dass* dominates *kommt* and *kommt* dominates *Peter*, while *ob* dominates *kommt* and *kommt* dominates *Klaus*. I used the wrong *kommt* in the dependency chains, but this is an issue of linearization and is independent of dependency. As soon as one takes linearization information into account, the dependency graph in Figure 11.42 is ruled out since *ob* 'whether' does not precede its verbal dependent *kommt* 'comes'. But this explanation does not work for the example in Figure 11.6. Here, all dependents are linearized correctly; it is just the discontinuity of *die* and *Türen* that is inappropriate. If it is required that *die* and *Türen* are continuous, we have basically let constituents back in (see Footnote 9 on page 376).

Similarly, non-projective analyses without any constraints regarding continuity would permit the word salad in (41b):

(41) a. Deshalb klärt, dass Peter kommt, ob Klaus spielt.
        therefore resolves that Peter comes whether Klaus plays
        'Therefore, that Peter comes resolves whether Klaus plays.'
     b. * Deshalb klärt dass ob Peter Klaus kommt spielt.
          therefore resolves that whether Peter Klaus comes plays

(41b) is a variant of (41a) in which the elements of the two clausal arguments are in correct order with respect to each other, but both clauses are discontinuous in such a way that the elements of each clause alternate. The dependency graph is shown in Figure 11.43. As was explained in Section 10.6.4.4 on the analysis of nonlocal dependencies in Fluid Construction Grammar, a grammar of languages like English and German has to constrain the clauses in such a way that they are continuous with the exception of extractions to the left. A similar statement can be found in Hudson (1980: 192). Hudson also states that an item can be fronted in English, provided all of its dependents are fronted with it (p. 184). This "item with all its dependents" is the constituent in constituent-based grammars. The difference is that this object is not given an explicit name and is not assumed to be a separate entity containing the head and its dependents in most Dependency Grammars.<sup>34</sup>

Figure 11.43: The dependency graph of the word salad *Deshalb klärt dass ob Peter Klaus kommt spielt.* 'Therefore resolves that whether Peter Klaus comes plays', which is admitted by non-projective Dependency Grammars that do not restrict discontinuity

Summing up what has been covered in this section so far, I have shown what a phrase structure grammar that corresponds to a certain Dependency Grammar looks like. I have also shown how discontinuous constituents can be allowed for. However, one issue has remained unaddressed so far: not all properties of a phrase are identical to those of its lexical head, and the differences have to be represented somewhere. I will discuss this in the following subsection.

#### **11.7.2.3 Features that are not identical between heads and projections**

As Oliva (2003) points out, the equivalence of Dependency Grammar and HPSG only holds as far as head values are concerned. That is, the node labels in dependency graphs correspond to the head values in an HPSG grammar. There are, however, additional features like cont for the semantics and slash for nonlocal dependencies. These values usually differ between a lexical head and its phrasal projections. For illustration, let us have a look at the phrase *a book*. The semantics of the lexical material and the complete phrase is given in (42):<sup>35</sup>

<sup>34</sup>See however Hellwig (2003) for an explicit proposal that assumes that there is a linguistic object that represents the whole constituent rather than just the lexical head.

<sup>35</sup>For lambda expressions see Section 2.3.

(42) a. *a*: λP λQ ∃x (P(x) ∧ Q(x))
     b. *book*: λy (*book*′(y))
     c. *a book*: λQ ∃x (*book*′(x) ∧ Q(x))

Now, the problem for the Dependency Grammar notation is that there is no NP node that could be associated with the semantics of *a book* (see Figure 11.36 on page 402), the only thing present in the tree is a node for the lexical N: the node for *book*.<sup>36</sup> This is not a big problem, however: the lexical properties can be represented as part of the highest node as the value of a separate feature. The N node in a dependency graph would then have a cont value that corresponds to the semantic contribution of the complete phrase and a lex-cont value that corresponds to the contribution of the lexical head of the phrase. So for *a book* we would get the following representation:

$$\text{(43)}\quad \begin{bmatrix} \text{CONT} & \lambda Q \exists x (book'(x) \land Q(x)) \\ \text{LEXICAL-CONT} & \lambda y\,(book'(y)) \end{bmatrix}$$

With this kind of representation one could maintain analyses in which the semantic contribution of a head together with its dependents is a function of the semantic contribution of the parts.
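The composition in (42) can be emulated directly with Python functions standing in for the lambda terms. The following toy model is mine; the three-element domain is made up for the demonstration:

```python
domain = {"b1", "b2", "m1"}                       # two books, one magazine

book = lambda y: y in {"b1", "b2"}                # (42b): λy (book′(y))
a = lambda P: (lambda Q: any(P(x) and Q(x) for x in domain))   # (42a)
a_book = a(book)                                  # (42c): λQ ∃x (book′(x) ∧ Q(x))

print(a_book(lambda x: x == "b1"))                # True: some book is b1
print(a_book(lambda x: x == "m1"))                # False: no book is m1
```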

Now, there are probably further features in which lexical heads differ from their projections. One such feature would be slash, which is used for nonlocal dependencies in HPSG and could be used to establish the relation between the risen element and the head in an approach à la Groß & Osborne (2009). Of course we can apply the same trick again. We would then have a feature lexical-slash. But this could be improved and the features of the lexical item could be grouped under one path. The general skeleton would then be (44):

$$\text{(44)}\quad \begin{bmatrix} \text{CONT} & \dots \\ \text{SLASH} & \dots \\ \text{LEXICAL} & \begin{bmatrix} \text{CONT} & \dots \\ \text{SLASH} & \dots \end{bmatrix} \end{bmatrix}$$

But if we rename lexical to head-dtr, we basically get the HPSG representation.

Hellwig (2003: 602) states that his special version of Dependency Grammar, which he calls Dependency Unification Grammar, assumes that governing heads select complete nodes with all their daughters. These nodes may differ in their properties from their head (p. 604). They are in fact constituents. So this very explicit and formalized variant of Dependency Grammar is very close to HPSG, as Hellwig states himself (p. 603).

<sup>36</sup>Hudson (2003: 391–392) is explicit about this: "In dependency analysis, the dependents modify the head word's meaning, so the latter carries the meaning of the whole phrase. For example, in *long books about linguistics*, the word *books* means 'long books about linguistics' thanks to the modifying effect of the dependents." For a concrete implementation of this idea see Figure 11.44.

An alternative is to assume different representational levels as in Meaning–Text Theory (Mel'čuk 1981). In fact the cont value in HPSG is also a different representational level. However, this representational level is in sync with the other structure that is built.


Hudson's Word Grammar (2018) is also explicitly worked out and, as will be shown, it is rather similar to HPSG. The representation in Figure 11.44 is a detailed description of what the abbreviated version in Figure 11.45 stands for.

Figure 11.44: Analysis of *Small children were playing outside.* according to Hudson (2018: 105)

Figure 11.45: Abbreviated analysis of *Small children were playing outside.* according to Hudson (2018: 105)

What is shown in the first diagram is that a combination of two nodes results in a new node.<sup>37</sup> For instance, the combination of *playing* and *outside* yields *playing*′, the combination of *small* and *children* yields *children*′, and the combination of *children*′ and *playing*′ yields *playing*′′. The combination of *were* and *playing*′′ results in *were*′ and the combination of *children*′′ and *were*′ yields *were*′′. The only thing left to explain is why there is a node for *children* that is not the result of the combination of two nodes, namely *children*′′. The line with the triangle at the bottom stands for default inheritance. That is, the upper node inherits all properties from the lower node by default. Defaults can be overridden, that is, information at the upper node may differ from information at the dominated node. This makes it possible to handle semantics compositionally: nodes that are the result of the combination of two nodes have a semantics that is the combination of the meaning of the two combined nodes. Turning to *children* again, *children*′ has the property that it must be adjacent to *playing*, but since the structure is a raising structure in which *children* is raised to the subject of *were*, this property is overwritten in a new instance of *children*, namely *children*′′.
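Default inheritance of this kind is straightforward to emulate. In the following Python sketch (the attribute names are invented for illustration), the upper node starts out as a copy of the node it dominates and explicit statements override the inherited values, mirroring how *children*′′ overrides the adjacency requirement of *children*′:

```python
def inherit(lower, overrides=None):
    """The upper node inherits all properties of the lower node by
    default; explicitly stated properties override inherited ones."""
    upper = dict(lower)
    upper.update(overrides or {})
    return upper

children1 = {"form": "children", "number": "plural", "adjacent-to": "playing"}
children2 = inherit(children1, {"adjacent-to": "were"})   # raised subject
print(children2)
# {'form': 'children', 'number': 'plural', 'adjacent-to': 'were'}
```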

<sup>37</sup>By assuming these additional nodes Hudson addresses earlier criticism by Dahl (1980), who pointed out that *ordinary* in *ordinary French house* does not refer to *house* but to *French house*. So there has to be a representation for *French house* somewhere. At least at the semantic level. Hudson's additional nodes and classical N nodes solve this problem as well.

The interesting point now is that we get almost a normal phrase structure tree if we replace the words in the diagram in Figure 11.44 by syntactic categories. The result of the replacement is shown in Figure 11.46. The only thing unusual in this graph (marked by dashed lines) is that N′ is combined with V[*ing*]′ and the mother of N′, namely N′′, is combined with V[*fin*]′. As explained above, this is due to the analysis of raising in Word Grammar, which involves multiple dependencies between a raised item and its heads. There are two N nodes (N′ and N′′) in Figure 11.46 and two instances of *children* in Figure 11.44. Apart from this, the structure corresponds to what an HPSG grammar would license. The nodes in Hudson's diagram which are connected with lines with triangles at the bottom are related to their children using default inheritance. This too is rather similar to those versions of HPSG that use default inheritance. For instance, Ginzburg & Sag (2000: 33) use a Generalized Head Feature Principle that projects all properties of the head daughter to the mother by default.

Figure 11.46: Analysis of *Small children are playing outside.* with category symbols

The conclusion of this section is that the only principled difference between phrase structure grammars and Dependency Grammar is the question of how much intermediate structure is assumed: is there a VP without the subject? Are there intermediate nodes for adjunct attachment? It is difficult to decide these questions in the absence of fully worked out proposals that include semantic representations. Those proposals that are worked out – like Hudson's and Hellwig's – assume intermediate representations, which makes these approaches rather similar to phrase structure-based approaches. If one compares the structures of these fully worked out variants of Dependency Grammar with phrase structure grammars, it becomes clear that the claim that Dependency Grammars are simpler is unwarranted. This claim holds for compacted schematic representations like Figure 11.45 but it does not hold for fully worked out analyses.

The simplicity claim is repeatedly made in Timothy Osborne's work (for example in Osborne & Groß 2016: 132; Osborne 2018b: 2). In a reply to Osborne (2018b), I mentioned some of the phenomena discussed above, pointed out that they are not captured by simple dependency structures (Müller 2019b), and argued that additional structure is needed in order to account for these phenomena.


Somewhat ironically, Osborne (2018a) worked out analyses in his reply that introduced a new concept (the Colocant) and additional structure to capture semantic groupings. By doing so, he proved the point made above and in Müller (2019b).

#### **11.7.2.4 Non-headed constructions**

Hudson (1980: Section 4.E) discusses headless constructions like those in (45):

	- a. the rich
	- b. the biggest
	- c. the longer the stem
	- d. (with) his hat over his eyes

He argues that the terms *adjective* and *noun* should be accompanied by the term *substantive*, which subsumes both terms. Then he suggests that *if a rule needs to cover the constructions traditionally referred to as noun-phrases, with or without heads, it just refers to 'nouns', and this will automatically allow the constructions to have either substantives or adjectives as heads.* (p. 195) The question that has to be asked here, however, is what the internal dependency structure of substantive phrases like *the rich* would be. The only way to connect the items seems to be to assume that the determiner is dependent on the adjective. But this would allow for two structures of phrases like *the rich man*: one in which the determiner depends on the adjective and one in which it depends on the noun. So underspecification of part of speech does not seem to solve the problem. Of course all problems with non-headed constructions can be solved by assuming empty elements.<sup>38</sup> This has been done in HPSG in the analysis of relative clauses (Pollard & Sag 1994: Chapter 5). English and German relative clauses consist of a phrase that contains a relative word and a sentence in which the relative phrase is missing. Pollard & Sag assume an empty relativizer that selects for the relative phrase and the clause with a gap (Pollard & Sag 1994: 216–217). Similar analyses can be found in Dependency Grammar (Eroms 2000: 291).<sup>39</sup> Now, the alternative to empty elements is phrasal constructions.<sup>40</sup>

<sup>38</sup>See Section 2.4.1 for the assumption of an empty head in a phrase structure grammar for noun phrases.

<sup>39</sup>The Dependency Grammar representations usually have a *d*- element as the head of the relative clause. However, since the relative pronoun is also present in the clause and since the *d*- is not pronounced twice, assuming an additional *d*- head is basically assuming an empty head. Another option is to assume that words may have multiple functions: so, a relative pronoun may be both a head and a dependent simultaneously (Tesnière 2015: Chapter 246, §8–11; Kahane & Osborne 2015: xlvi; Kahane 2009: 129–130). At least the analysis of Kahane is an instance of the Categorial Grammar analysis that was discussed in Section 8.6 and it suffers from the same problems: if the relative pronoun is a head that selects for a clause that is missing the relative pronoun, it is not easy to see how this analysis extends to cases of pied-piping like (i), in which the extracted element is a complete phrase containing the relative pronoun rather than just the pronoun itself.

(i) die Frau, von deren Schwester ich ein Bild gesehen habe
    the woman of whose sister I a picture seen have
    'the woman of whose sister I saw a picture'

<sup>40</sup>See Chapter 19 on empty elements in general and Subsection 21.10.3 on relative clauses in particular.

Sag (1997), working on relative clauses in English, suggested a phrasal analysis of relative clauses in which the relative phrase and the clause from which it is extracted form a new phrase. A similar analysis was assumed by Müller (1996c) and is documented in Müller (1999b: Chapter 10). As was discussed in Section 8.6, it is neither plausible to assume the relative pronoun or some other element in the relative phrase to be the head of the entire relative clause, nor is it plausible to assume the verb to be the head of the entire relative clause (pace Sag), since relative clauses modify Ns, something that projections of (finite) verbs usually do not do. So assuming an empty head or a phrasal schema seems to be the only option.

Chapter 21 is devoted to the discussion of whether certain phenomena should be analyzed as involving phrase structural configurations or whether lexical analyses are better suited in general or for modeling some phenomena. I argue there that all phenomena interacting with valence should be treated lexically. But there are other phenomena as well and Dependency Grammar is forced to assume lexical analyses for all linguistic phenomena. There always has to be some element on which others depend. It has been argued by Jackendoff (2008) that it does not make sense to assume that one of the elements in N-P-N constructions like those in (46) is the head.

	- a. day by day, paragraph by paragraph, country by country
	- b. dollar for dollar, student for student, point for point
	- c. face to face, bumper to bumper
	- d. term paper after term paper, picture after picture
	- e. book upon book, argument upon argument

Of course there is a way to model all the phenomena that would be modeled by a phrasal construction in frameworks like GPSG, CxG, HPSG, or Simpler Syntax: an empty head. Figure 11.47 shows the analysis of *student after student*. The lexical item for the empty N would be very special, since there are no similar non-empty lexical nouns, that is, there is no noun that selects for two bare Ns and a P.

Bargmann (2015) pointed out an additional aspect of the N-P-N construction, which makes things more complicated. The pattern is not restricted to two nouns. There can be arbitrarily many of them:

(47) Day after day after day went by, but I never found the courage to talk to her.

So rather than an N-P-N pattern Bargmann suggests the pattern in (48), where '+' stands for at least one repetition of a sequence.

(48) N (P N)+

Now, such patterns would be really difficult to model in selection-based approaches, since one would have to assume that an empty head or a noun selects for an arbitrary number of pairs of the same preposition and noun or nominal phrase. Of course one could assume that P and N form some sort of constituent, but still one would have to make sure that the right preposition is used and that the noun or nominal projection has the right phonology. Another possibility would be to assume that the second N in N-P-N can be an N-P-N and thereby allow recursion in the pattern. But if one follows this approach it is getting really difficult to check the constraint that the involved Ns should have the same or at least similar phonologies.
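A toy surface check makes clear what a selection-based account has to verify across the whole pattern. The following Python sketch (mine; it works on plain token strings and ignores syntax and semantics entirely) accepts sequences of the form N (P N)+ only if all nouns match and all prepositions match:

```python
def is_npn(tokens):
    """Accept N (P N)+ where every N is the same noun and every
    P the same preposition, as in 'day after day after day'."""
    if len(tokens) < 3 or len(tokens) % 2 == 0:
        return False
    noun, prep = tokens[0], tokens[1]
    return (all(t == noun for t in tokens[0::2]) and
            all(t == prep for t in tokens[1::2]))

print(is_npn("day after day after day".split()))   # True
print(is_npn("student for student".split()))        # True
print(is_npn("day after night".split()))            # False
```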

One way out of these problems would of course be to assume that there are special combinatorial mechanisms that assign a new category to one or several elements. This would basically be an unheaded phrase structure rule and this is what Tesnière suggested: transfer rules (see Section 11.6.2.2). But this is of course an extension of pure Dependency Grammar towards a mixed model. See also Hudson (2021: 1476–1477) for a treatment of the N-P-N construction in Word Grammar involving a complex network of nodes, that is, something that goes beyond the normal descriptive devices of Dependency Grammars.

See Section 21.10 for the discussion of further cases which are probably problematic for purely selection-based grammars.

## **Exercises**

Provide the dependency graphs for the following sentences:

(49) a. Ich habe einen Mann getroffen, der blonde Haare hat.
        I have a man met who blond hair has
        'I have met a man who has blond hair.'
     b. Dass er morgen kommen wird, freut uns.
        that he tomorrow come will pleases us
        'That he will come tomorrow pleases us.'

You may use non-projective dependencies. For the analysis of relative clauses authors usually propose an abstract entity that functions as a dependent of the modified noun and as a head of the verb in the relative clause.

## **Further reading**

In the section on further reading in Chapter 3, I referred to the book called *Syntaktische Analyseperspektiven* 'Syntactic perspectives on analyses'. The chapters in this book have been written by proponents of various theories and all analyze the same newspaper article. The book also contains a chapter by Engel (2014), assuming his version of Dependency Grammar, namely *Dependent Verb Grammar*.

Ágel, Eichinger, Eroms, Hellwig, Heringer & Lobin (2003, 2006) published a handbook on dependency and valence that discusses all aspects related to Dependency Grammar in any imaginable way. Many of the papers have been cited in this chapter. Papers comparing Dependency Grammar with other theories are especially relevant in the context of this book: Lobin (2003) compares Dependency Grammar and Categorial Grammar, Oliva (2003) deals with the representation of valence and dependency in HPSG, Hudson (2021) discusses Dependency Grammar in general and compares his version of the theory, namely Word Grammar, with HPSG, and Bangalore, Joshi & Rambow (2003) describe how valence and dependency are covered in TAG. Hellwig (2006) compares rule-based grammars with Dependency Grammars with special consideration given to parsing by computer programs.

Osborne & Groß (2012) compare Dependency Grammar with Construction Grammar and Osborne, Putnam & Groß (2011) argue that certain variants of Minimalism are in fact reinventions of dependency-based analyses.

The original work on Dependency Grammar by Tesnière (1959) is also available in parts in German (Tesnière 1980) and in full in English (Tesnière 2015).

# **12 Tree Adjoining Grammar**

*Tree Adjoining Grammar* (TAG) was developed by Aravind Joshi at the University of Pennsylvania in the USA (Joshi, Levy & Takahashi 1975). Several important dissertations in TAG have been supervised by Aravind Joshi and Anthony Kroch at the University of Pennsylvania (e.g., Rambow 1994). Other research centers with a focus on TAG are Paris 7 (Anne Abeillé), Columbia University in the USA (Owen Rambow) and Düsseldorf, Germany (Laura Kallmeyer). Rambow (1994) and Gerdes (2002b) are more detailed studies of German.<sup>1</sup>

TAG and its variants with relevant extensions are of interest because it is assumed that this grammatical formalism can – with regard to its expressive power – relatively accurately represent what humans do when they produce or comprehend natural language. The expressive power of Generalized Phrase Structure Grammar was deliberately constrained so that it corresponds to context-free phrase structure grammars (Type-2 languages) and it has in fact been demonstrated that this is not enough (Shieber 1985, Culy 1985).<sup>2</sup> Grammatical theories such as HPSG and CxG can generate/describe so-called Type-0 languages and are thereby far above the level of complexity presently assumed for natural languages. The assumption is that this complexity lies somewhere between context-free and context-sensitive (Type-1) languages. This class is thus referred to as *mildly context-sensitive*. Certain TAG-variants are inside of this language class and it is assumed that they can produce exactly those structures that occur in natural languages. For more on complexity, see Section 12.6.3 and Chapter 17.

There are various systems for the processing of TAG grammars (Doran, Hockey, Sarkar, Srinivas & Xia 2000, Parmentier, Kallmeyer, Maier, Lichte & Dellert 2008, Kallmeyer, Lichte, Maier, Parmentier, Dellert & Evang 2008, Koller 2017). Smaller and larger TAG fragments have been developed for the following languages:


<sup>1</sup> Since my knowledge of French leaves something to be desired, I just refer to the literature in French here without being able to comment on the content.

<sup>2</sup> See Pullum (1986) for a historical overview of the complexity debate and G. Müller (2011) for argumentation for the non-context-free nature of German, which follows parallel to Culy with regard to the N-P-N construction (see Section 21.10.4).


Candito (1996) has developed a system for the representation of meta grammars which allows the uniform specification of crosslinguistic generalizations. This system was used by some of the projects mentioned above for the derivation of grammars for specific languages. For instance Kinyon, Rambow, Scheffler, Yoon & Joshi (2006) derive the verb second languages from a common meta grammar. Among those grammars for verb second languages is a grammar of Yiddish for which there was no TAG grammar until 2006.

Resnik (1992) combines TAG with a statistics component.

## **12.1 General remarks on representational format**

### **12.1.1 Representation of valence information**

Figure 12.1 shows so-called elementary trees. These are present in the lexicon and can be combined to create larger trees. Nodes for the insertion of arguments are specially marked (NP↓ in the tree for *laughs*). Nodes for the insertion of adjuncts into a tree are also marked (VP<sup>∗</sup> in the tree for *always*). Grammars where elementary trees always contain at least one word are referred to as *Lexicalized Tree Adjoining Grammar* (LTAG, Schabes, Abeillé & Joshi (1988)).

Figure 12.1: Elementary trees

### **12.1.2 Substitution**

Figure 12.2 on the next page shows the substitution of nodes. Other subtrees have to be inserted into substitution nodes such as the NP node in the tree for *laughs*. The tree for *John* is inserted there in the example derivation.


### **12.1.3 Adjunction**

Figure 12.3 shows an example of how the adjunction tree for *always* can be used.

Adjunction trees can be inserted into other trees. Upon insertion, the target node (bearing the same category as the node marked with '\*') is replaced by the adjunction tree and the subtree that the target node dominated is attached below the foot node, that is, the node marked with '\*'.

Figure 12.3: Adjunction

TAG differs considerably from the simple phrase structure grammars we encountered in Chapter 2 in that the trees extend over a larger domain: for example, there is an NP node in the tree for *laughs* that is not a sister of the verb. In a phrase structure grammar (and of course in GB and GPSG since these theories are more or less directly built on phrase structure grammars), it is only ever possible to describe subtrees with a depth of one level. For the tree for *laughs*, the relevant rules would be those in (1):

(1) S → NP VP
    VP → V
    V → laughs

In this context, it is common to speak of *locality domains*. The extension of the locality domain is of particular importance for the analysis of idioms (see Section 18.2).

TAG differs from other grammatical theories in that it is possible for structures to be broken up again. In this way, it is possible to use adjunction to insert any amount of material into a given tree and thereby cause originally adjacent constituents to end up being arbitrarily far away from each other in the final tree. As we will see in Section 12.5, this property is important for the analysis of long-distance dependencies without movement.
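Substitution and adjunction can be sketched in a few lines of Python. The list encoding of trees and the ↓/\* marking below follow the conventions of Figure 12.1; everything else is my own illustration, not an implementation of any of the TAG systems cited in this chapter:

```python
def substitute(tree, arg):
    """Insert arg at substitution nodes of the matching category."""
    if isinstance(tree, str):
        return tree
    label, *kids = tree
    if label == arg[0] + "↓" and not kids:
        return arg
    return [label] + [substitute(k, arg) for k in kids]

def plug_foot(aux, subtree):
    """Attach subtree at the foot node (label marked '*') of aux."""
    if isinstance(aux, str):
        return aux
    label, *kids = aux
    if label == subtree[0] + "*":
        return subtree
    return [label] + [plug_foot(k, subtree) for k in kids]

def adjoin(tree, aux):
    """Replace the topmost node matching the root of aux by aux,
    putting the replaced subtree below the foot node of aux."""
    if isinstance(tree, str):
        return tree
    label, *kids = tree
    if label == aux[0]:
        return plug_foot(aux, tree)
    return [label] + [adjoin(k, aux) for k in kids]

laughs = ["S", ["NP↓"], ["VP", ["V", "laughs"]]]
john   = ["NP", "John"]
always = ["VP", ["ADV", "always"], ["VP*"]]

print(adjoin(substitute(laughs, john), always))
# ['S', ['NP', 'John'], ['VP', ['ADV', 'always'], ['VP', ['V', 'laughs']]]]
```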

### **12.1.4 Semantics**

There are different approaches to the syntax-semantics interface in TAG. One possibility is to assign a semantic representation to every node in the tree. The alternative is to assign each elementary tree exactly one semantic representation. The semantics construction does not make reference to syntactic structure but rather the way the structure is combined. This kind of approach has been proposed by Candito & Kahane (1998) and then by Kallmeyer & Joshi (2003), who build on it. The basic mechanisms will be briefly presented in what follows.

In the literature on TAG, a distinction is made between derived trees and derivation trees. Derived trees correspond to constituent structure (the trees for *John laughs* and *John always laughs* in Figures 12.2 and 12.3). The derivation tree contains the derivational history, that is, information about how the elementary trees were combined. The elements in a derivation tree represent predicate-argument dependencies, which is why it is possible to derive a semantic derivation tree from them. This will be shown on the basis of the sentence in (2):

(2) Max likes Anouk.

The elementary trees for (2) and the derived tree are given in Figure 12.4 on the next page. The nodes in trees are numbered from top to bottom and from left to right. The result of this numbering of nodes for *likes* is shown in Figure 12.5 on the facing page. The topmost node in the tree for *likes* is S and has the position 0. Beneath S, there is an NP and a VP node. These nodes are again numbered starting at 1. NP has the position 1 and VP the position 2. The VP node has in turn two daughters: V and the object NP. V receives the number 1 and the object NP the number 2. By combining these numbers, it is possible to unambiguously address individual nodes in the tree. The position for the subject NP is 1 since it is a daughter of S and occurs in first position. The object NP has the numeric sequence 2.2 since it is below the VP (the second daughter of S = 2) and occurs in second position (the second daughter of VP = 2).
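The numbering scheme, known as Gorn addresses, is easy to compute; the following sketch is purely illustrative (the `addresses` helper is not from the TAG literature) and reproduces the positions just described for the tree of *likes*.

```python
# Computing node addresses: a node's address is its position among its
# sisters appended to the address of its mother; the root has address 0.
def addresses(tree, prefix=""):
    """tree is a (label, children) pair; yields (address, label) pairs."""
    label, children = tree
    yield (prefix or "0", label)
    for i, child in enumerate(children, start=1):
        yield from addresses(child, f"{prefix}.{i}" if prefix else str(i))

likes = ("S", [("NP", []),                        # subject
               ("VP", [("V", []), ("NP", [])])])  # verb and object

for address, label in addresses(likes):
    print(address, label)
# 0 S, 1 NP (subject), 2 VP, 2.1 V, 2.2 NP (object)
```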

With these tree positions, the derivation tree for (2) can be represented as in Figure 12.6 on the next page. The derivation tree expresses the fact that the elementary tree for *likes* was combined with two arguments that were inserted into the substitution positions 1

Figure 12.4: Elementary trees and derived tree for *Max likes Anouk.*

Figure 12.5: Node positions in the elementary tree for *likes*

Figure 12.6: Derivation tree for *Max likes Anouk.*

#### 12 Tree Adjoining Grammar

and 2.2. The derivation tree also contains information about what exactly was placed into these nodes.

Kallmeyer & Joshi (2003) use a variant of *Minimal Recursion Semantics* as their semantic representational formalism (Copestake, Flickinger, Pollard & Sag 2005). I will use a considerably simplified representation here, as I did in Section 9.1.6 on semantics in HPSG. For the elementary trees *Max*, *likes* and *Anouk*, we can assume the semantic representations in (3).

(3) Semantic representations for elementary trees:

*Max*: max(x₁)  
*likes*: like(x, y), arg: ⟨x, 1⟩, ⟨y, 2.2⟩  
*Anouk*: anouk(x₂)
In a substitution operation, a variable is assigned a value. If, for example, the elementary tree for *Max* is inserted into the subject position of the tree for *likes*, then x₁ is identified with x. In the same way, x₂ is identified with y if the tree for *Anouk* is inserted into the object position. The result of these combinations is the representation in (4):

(4) Combination of the meaning of elementary trees:

max(x₁), like(x₁, x₂), anouk(x₂)
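The variable identification at work in (3) and (4) can be illustrated with a small sketch; the dictionary representation and the helper `substitute_sem` are simplifications invented for this example.

```python
# A sketch of semantic combination: substitution at a tree position
# identifies the verb's variable at that position with the NP's variable.
likes = {"rels": [("like", "x", "y")],
         "args": {"1": "x", "2.2": "y"}}    # substitution position -> variable
max_  = {"rels": [("max", "x1")], "var": "x1"}
anouk = {"rels": [("anouk", "x2")], "var": "x2"}

def substitute_sem(verb, position, np):
    """Identify the verb's variable at the given position with np's variable."""
    old = verb["args"].pop(position)
    rename = lambda v: np["var"] if v == old else v
    verb["rels"] = [tuple([rel[0]] + [rename(v) for v in rel[1:]])
                    for rel in verb["rels"]]
    verb["rels"] += np["rels"]

substitute_sem(likes, "1", max_)       # Max into subject position 1
substitute_sem(likes, "2.2", anouk)    # Anouk into object position 2.2
print(likes["rels"])
# [('like', 'x1', 'x2'), ('max', 'x1'), ('anouk', 'x2')]
```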
Kallmeyer & Joshi (2003) show how an extension of TAG, Multi-Component LTAG, can handle quantifier scope and discuss complex cases with embedded verbs. Interested readers are referred to the original article.

# **12.2 Local reordering**

In TAG, there is a family of trees for each word. In order to account for ordering variants, one can assume that there are six trees corresponding to a ditransitive verb and that each of these corresponds to a different ordering of the arguments. Trees are connected to one another via lexical rules. This lexical rule-based analysis is parallel to the one developed by Uszkoreit (1986b) in Categorial Grammar.

Alternatively, one could assume a format for TAG structures similar to what we referred to as the ID/LP format in the chapter on GPSG. Joshi (1987b) defines an elementary structure as a pair consisting of a dominance structure and linearization constraints. Unlike in GPSG, the linearization rules do not hold for all dominance rules but rather for a particular dominance structure. This is parallel to what we saw in Section 10.6.3 on Embodied-CxG. Figure 12.7 on the facing page shows a dominance tree with numbered nodes. If we combine this dominance structure with the linearization rules in (5), we arrive at exactly the order that we would get with ordinary phrase structure rules, namely NP₁ V NP₂.

Figure 12.7: Dominance structure with numbered nodes

$$\text{(5)}\quad \text{LP}_1^{\alpha} = \{\, 1 < 2,\ 2.1 < 2.2 \,\}$$

If one specifies the linearization restrictions as in (6), all the orders in (7) are permitted, since the empty set means that we do not state any restrictions at all.

$$\begin{aligned} \text{(6)} \quad & \text{LP}_2^{\alpha} = \{\}\\ \text{(7)} \quad & \text{a. } \text{NP}_1\ \text{V}\ \text{NP}_2\\ & \text{b. } \text{NP}_2\ \text{V}\ \text{NP}_1\\ & \text{c. } \text{NP}_1\ \text{NP}_2\ \text{V}\\ & \text{d. } \text{NP}_2\ \text{NP}_1\ \text{V}\\ & \text{e. } \text{V}\ \text{NP}_1\ \text{NP}_2\\ & \text{f. } \text{V}\ \text{NP}_2\ \text{NP}_1 \end{aligned}$$

This means that it is possible to derive all orders that were derived in GPSG with flat sentence rules despite the fact that there is a constituent in the tree that consists of NP and VP. Since the dominance rules include a larger locality domain, such grammars are called LD/LP grammars (local dominance/linear precedence) rather than ID/LP grammars (immediate dominance/linear precedence) (Joshi, Vijay-Shanker & Weir 1990).
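The division of labor between a dominance structure and LP statements can be made concrete with a small sketch; `admissible` is a hypothetical helper that simply filters permutations of the terminals against the LP statements, here spelled out over the terminal nodes of Figure 12.7.

```python
# LP statements as precedence pairs over terminals; an order is admissible
# if every stated pair occurs in that order.
from itertools import permutations

def admissible(terminals, lp):
    result = []
    for order in permutations(terminals):
        position = {node: i for i, node in enumerate(order)}
        if all(position[a] < position[b] for a, b in lp):
            result.append(order)
    return result

terminals = ["NP1", "V", "NP2"]
# LP1 = {1 < 2, 2.1 < 2.2}, spelled out over terminals: NP1 precedes
# everything under node 2, and V precedes NP2.
lp1 = [("NP1", "V"), ("NP1", "NP2"), ("V", "NP2")]
lp2 = []                                  # LP2 = {}: no restrictions at all

print(admissible(terminals, lp1))         # [('NP1', 'V', 'NP2')]
print(len(admissible(terminals, lp2)))    # 6 -- all the orders in (7)
```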

Simple variants of TAG such as those presented in Section 12.1 cannot deal with reordering if the arguments of different verbs are scrambled as in (8).

(8) weil ihm das Buch jemand zu lesen versprochen hat<sup>3</sup>  
because him.dat the.acc book somebody.nom to read promised has  
'because somebody promised him to read the book'

In (8), *das Buch* 'the book' is the object of *zu lesen* 'to read', and *ihm* 'him' and *jemand* 'somebody' are dependent on *versprochen* and *hat*, respectively. These cases can be analyzed by LD/LP-TAG developed by Joshi (1987b) and Free Order TAG (FO-TAG) (Becker, Joshi & Rambow 1991: 21) since both of these TAG variants allow for crossing edges.

Since certain restrictions cannot be expressed in FO-TAG (Rambow 1994: 48–50), so-called Multi-Component TAG was developed. Joshi, Becker & Rambow (2000) illustrate

<sup>3</sup> For more on this kind of example, see Bech (1955).


the problem that simple LTAG grammars have with sentences such as (8) using examples such as (9):<sup>4</sup>

(9) b. … daß des Verbrechens der Detektiv den Verdächtigen dem Klienten [\_ \_ zu überführen] versprach  
that the.gen crime the.nom detective the.acc suspect the.dat client to indict promised  
'… that the detective promised the client to indict the suspect of the crime'

In LTAG, the elementary trees for the relevant verbs look as shown in Figure 12.8. The

Figure 12.8: Elementary trees of an infinitive and a control verb

verbs are numbered according to their level of embedding. The NP arguments of a verb bear the same index as that verb and each has a superscript number that distinguishes it from the other arguments. The trees are very similar to those in GB. In particular, it is assumed that the subject occurs outside the VP. For non-finite verbs, it is assumed that the subject is realized by PRO. PRO is, like *e*, a phonologically empty pronominal category that also comes from GB. The left tree in Figure 12.8 contains traces in the normal positions of the arguments and the corresponding NP slots in higher positions in the tree. An interesting difference to other theories is that these traces only exist in the tree. They are not represented as individual entries in the lexicon, as the lexicon only contains words and the corresponding trees.

<sup>4</sup> The authors use *versprochen hat* 'has promised' rather than *versprach* 'promised', which sounds better but does not correspond to the trees they use.

The tree for *versprach* 'promised' can be inserted into any S node in the tree for *zu überführen* 'to indict', resulting in trees such as those in Figures 12.9 and 12.10.

Figure 12.9: Analysis of the order NP²₂ NP¹₂ NP¹₁ NP²₁ V₂ V₁: adjunction to the lowest S node

In Figure 12.9, the tree for *versprach* is inserted directly above the PRO NP and in Figure 12.10 above NP¹₂.

It is clear that it is not possible to derive a tree in this way where an argument of *überführen* 'to indict' occurs between the arguments of *versprach* 'promised'. Joshi, Becker & Rambow (2000) therefore suggest an extension of the LTAG formalism. In MC-TAG, the grammar does not consist of elementary trees but rather of finite sets of elementary trees. In every derivational step, a set is selected and the elements of that set are simultaneously added to the tree. Figure 12.11 on the following page shows an elementary tree for *versprach* 'promised' consisting of multiple components. This tree contains a trace of NP¹₁ that was moved to the left. The bottom-left S node and the top-right S node are connected by a dashed line that indicates the dominance relation. However, immediate dominance is not required. Therefore, it is possible to insert the two subtrees into another tree separately from each other and thereby analyze the order in Figure 12.12 on page 429, for example.
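The idea of non-immediate dominance links can also be made concrete; the nested-pair representation and the `dominates` helper below are invented for this illustration and only check that a dashed link like the one in Figure 12.11 is respected in a candidate derived tree.

```python
# Checking a (non-immediate) dominance constraint in a derived tree,
# represented as nested (label, children) pairs.
def dominates(tree, upper, lower, inside_upper=False):
    """True if some node labeled `upper` dominates a node labeled `lower`."""
    label, children = tree
    if inside_upper and label == lower:
        return True
    return any(dominates(child, upper, lower,
                         inside_upper or label == upper)
               for child in children)

# A candidate derived tree: the upper component S1 dominates the lower
# component S2, with arbitrary material allowed in between.
derived = ("S1", [("NP", []), ("S2", [("V", [])])])
print(dominates(derived, "S1", "S2"))   # True: the dominance link holds
```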

Other variants of TAG that allow for other constituent orders are V-TAG (Rambow 1994) and TT-MC-TAG (Lichte 2007).

Figure 12.10: Analysis of the order NP²₂ NP¹₁ NP²₁ NP¹₂ V₂ V₁: adjunction to the S node between NP²₂ and NP¹₂

Figure 12.11: Elementary tree set for *versprach* consisting of multiple components

Figure 12.12: Analysis of the order NP¹₁ NP²₂ NP²₁ NP¹₂ V₂ V₁: adjunction to the S node between NP²₂ and NP¹₂

# **12.3 Verb position**

The position of the verb can be analyzed in a way parallel to the GPSG analysis: the verb can be realized in initial or in final position in a given linearization domain. Since the verb position has an effect on the clause type and hence on semantics, a lexical rule-based analysis would also be viable: a tree with the finite verb in initial position is licensed by a lexical rule that takes a tree with the verb in final position as input. This would be similar to the analyses in GB, Minimalism, and HPSG.

# **12.4 Passive**

There is a possible analysis for the passive that is analogous to the transformations in Transformational Grammar: one assumes lexical rules that license a lexical item with a passive tree for every lexical item with an active tree (Kroch & Joshi 1985: 50–51).

Kroch & Joshi (1985: 55) propose an alternative to this transformation-like approach that more adequately handles so-called raising constructions. Their analysis assumes that arguments of verbs are represented in subcategorization lists. Verbs are entered into trees that match their subcategorization list. Kroch and Joshi formulate a lexical rule that corresponds to the HPSG lexical rule that was discussed on page 290, that is, an accusative object is explicitly mentioned in the input of the lexical rule. Kroch and Joshi then suggest a complex analysis of the impersonal passive which uses a semantic null role for a non-realized object of intransitive verbs (p. 56). Such an analysis with abstract auxiliary entities can be avoided easily: one can instead use the HPSG analysis going back to Haider (1986a), which was presented in Section 9.2.

There are also proposals in TAG that use inheritance to deal with valence changing processes in general and the passive in particular (Candito 1996 and Kinyon, Rambow, Scheffler, Yoon & Joshi 2006 following Candito). As we saw in Section 10.2 of the Chapter on Construction Grammar, inheritance is not a suitable descriptive tool for valence changing processes. This is because these kinds of processes interact syntactically and semantically in a number of ways and can also be applied multiple times (Müller 2006, 2007b; 2007a: Section 7.5.2; 2013c; 2014a). See also Section 21.4 of this book.

# **12.5 Long-distance dependencies**

The analysis of long-distance dependencies in TAG is handled with the standard apparatus: simple trees are inserted into the middle of other trees. Figure 12.13 on the facing page shows an example of the analysis of (10):

(10) Who did John tell Sam that Bill likes \_ ?

The tree for *WH COMP NP likes \_* belongs to the tree family of *likes* and is therefore present in the lexicon. The tree for *tell* is adjoined to this tree, that is, this tree is inserted in the middle of the tree for *who that Bill likes \_* . Such an insertion operation can be applied multiple times so that sentences such as (11) where *who* is moved across multiple sentence boundaries can be analyzed:

(11) Who did John tell Sam that Mary said that Bill likes \_ ?

There is another important detail: although the tree for (12) has the category S, (12) is not a grammatical sentence of English.

(12) \* who that Bill likes

This has to be captured somehow. In TAG, the marking OA ensures that a tree counts as incomplete. If a tree contains a node with marking OA, then an obligatory adjunction operation must take place at the relevant position.

# **12.6 New developments and theoretical variants**

This section contains advanced material. I nevertheless suggest that readers deal with the TAG variants that are introduced here. The point is that such more complex variants are needed: if one just looks at the examples covered so far, TAG seems to be

Figure 12.13: Analysis of long-distance dependencies in TAG

much simpler than other frameworks. But this would be an unfair comparison, since more complex formal machinery is needed for a more adequate coverage of linguistic phenomena.

In Section 12.2, I introduced Multi-Component TAG. There are a large number of TAG variants with different formal properties. Rambow (1994) gives an overview of the variants that existed in 1994. In the following, I will discuss two interesting variants of TAG: Feature Structure-Based TAG (FTAG; Vijay-Shanker & Joshi 1988) and Vector-TAG (V-TAG; Rambow 1994).

### **12.6.1 FTAG**

In FTAG, nodes are not atomic (N, NP, VP or S), but instead consist of feature descriptions. With the exception of substitution nodes, each node has a top structure and a


bottom structure. The top structure says something about what kind of properties a given tree has inside a larger structure, and the bottom structure says something about the properties of the structure below the node. Substitution nodes only have a top structure. Figure 12.14 shows an example tree for *laughs*. A noun phrase can be combined

Figure 12.14: Elementary trees for *John* and *laughs* in FTAG

with the tree for *laughs* in Figure 12.14. Its top structure is identified with the NP node in the tree for *laughs*. The result of this combination is shown in Figure 12.15 on the facing page.

In a complete tree, all top structures are identified with the corresponding bottom structures. This way, only sentences where the subject is in third person singular can be analyzed with the given tree for *laughs*, that is, those in which the verb's agreement features match those of the subject.

For adjunction, the top structure of the tree that is being inserted must be unifiable with the top structure of the adjunction site, and the bottom structure of the node marked '\*' in the inserted tree (the so-called foot node) must be unifiable with the adjunction site.

The elementary trees discussed so far only consisted of nodes where the top part matched the bottom part. FTAG allows for an interesting variant of specifying nodes that makes adjunction obligatory in order for the entire derivation to be well-formed. Figure 12.16 on the next page shows a tree for *laughing* that contains two VP nodes with incompatible mode values. In order for this subtree to be used in a complete structure, another tree has to be added so that the two parts of the VP node are separated. This happens by means of an auxiliary tree as shown in Figure 12.16. The highest VP node

Figure 12.15: Combination of the trees for *John* and *laughs* in FTAG

Figure 12.16: Obligatory adjunction in FTAG



of the auxiliary tree is unified with the upper VP node of *laughing*. The node of the auxiliary tree marked with '\*' is unified with the lower VP node of *laughing*. The result of this is given in Figure 12.17.

Figure 12.17: Result of obligatory adjunction in FTAG

If a tree counts as a final derived tree, the top structures are identified with the bottom structures. Thus, the agr value of the highest VP node is identified with that of the lower one in the tree in Figure 12.17. As such, only NPs that have the same agr value as the auxiliary can be inserted into the NP slot.

This example shows that, instead of the marking for obligatory adjunction that we saw in the section on long-distance dependencies, the same effect can be achieved by using incompatible feature specifications on the top and bottom structures. If a tree contains incompatible top and bottom structures, then it cannot be a final derived tree, which means that at least one adjunction operation must still take place in order to yield a well-formed tree.
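The effect of incompatible top and bottom structures can be illustrated with a simplified unification sketch; the flat dictionaries and the mode values (`ind` vs. `ger`) are assumptions of this example, not the exact feature geometry of FTAG.

```python
# Very simplified unification of flat feature dictionaries: unification
# fails as soon as two structures assign different values to one feature.
def unify(top, bottom):
    merged = dict(top)
    for feature, value in bottom.items():
        if feature in merged and merged[feature] != value:
            return None                # clash: top and bottom are incompatible
        merged[feature] = value
    return merged

# VP node of "laughing" as in Figure 12.16: top and bottom disagree on mode,
# so the tree cannot be final -- adjunction at this node is obligatory.
vp_top = {"cat": "VP", "mode": "ind"}
vp_bottom = {"cat": "VP", "mode": "ger"}
print(unify(vp_top, vp_bottom))        # None

# The foot node of the auxiliary tree for "is" has a compatible bottom:
foot_bottom = {"cat": "VP", "mode": "ger"}
print(unify(foot_bottom, vp_bottom))   # {'cat': 'VP', 'mode': 'ger'}
```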

### **12.6.2 V-TAG**

V-TAG is a variant of TAG proposed by Owen Rambow (1994) that also assumes feature structures on nodes. In addition, like MC-TAG, it assumes that elementary trees consist of multiple components. Figure 12.18 shows the elementary lexical set for the ditransitive verb *geben* 'give'. The lexicon set consists of a tree for the verb, an empty element of the

Figure 12.18: Lexicon set for *geben* 'to give' in V-TAG according to Rambow (1994: 6)

category VP, and three trees in which a VP has been combined with an NP. As in MC-TAG, dominance relations are also indicated. The dominance constraints in Figure 12.18 ensure that all lower VP nodes dominate the highest VP node of the tree further to the right. The order of the arguments of the verb as well as the position of the verb is not fixed. The only thing required is that the lower VP nodes in the NP trees and the lower VP node in the *geben* tree dominate the empty VP node. With this lexicon set, it is possible to derive all permutations of the arguments. Rambow also shows how such lexical entries can be used to analyze sentences with verbal complexes. Figure 12.19 on the following page shows a verbal complex formed from *zu reparieren* 'to repair' and *versprochen* 'promised' and the relevant dominance constraints. The first two NP trees have to dominate *versprochen* and the third and fourth NP trees have to dominate *zu reparieren*. The order of the NP trees is not restricted and thus all permutations of the NPs can be derived.

The interesting thing here is that this approach is similar to the one proposed by Berman (1996: Section 2.1.3) in LFG (see Section 7.4): in Berman's analysis, the verb projects directly to form a VP and the arguments are then adjoined.

A difference to other analyses discussed in this book is that there is always an empty element in the derived trees regardless of verb position.

### **12.6.3 The competence-performance distinction and the generative capacity of tree-local MC-LTAG**

In many of the theories discussed in this book, a distinction is made between competence and performance (Chomsky 1965: Section I.1). Competence theories are supposed to describe linguistic knowledge, whereas a performance theory should explain how lin-

Figure 12.19: Analysis of the verbal complex *zu reparieren versprochen* in V-TAG

guistic knowledge is used and why we make mistakes during speech production and comprehension, etc. See Chapter 15 for further discussion.

Joshi, Becker & Rambow (2000) discuss examples of center self-embedding of relative clauses such as the one in (13b) and follow Chomsky & Miller (1963: 286) in assuming that the fact that this kind of embedding is only possible up to three levels should not be described by the grammar, but is rather due to processing problems on the part of the hearer that are independent of their in-principle grammatical abilities.

(13) b. dass der Hund, [₁ der die Katze, [₂ die die Maus gefangen hat, ₂] jagt ₁] bellt  
that the dog that the cat that the mouse caught has chases barks  
'that the dog that chases the cat that caught the mouse barks'

What is interesting in this context is that it is possible to construct examples of center embedding that are easier for the hearer to process. In this way, it is possible to increase the number of processable center embeddings by one and thereby to show that all grammars that impose the restriction that there may be at most two center-embedded relative clauses are incorrect. The following example from Hans Uszkoreit is easier to process since all embedded relative clauses are isolated and the verbs are separated by material from the higher clause.

(14) Die Bänke, [₁ auf denen damals die Alten des Dorfes, [₂ die allen Kindern, [₃ die vorbeikamen ₃], freundliche Blicke zuwarfen ₂], lange Stunden schweigend nebeneinander saßen ₁], mussten im letzten Jahr einem Parkplatz weichen.  
the benches on which back.then the old.people of.the village that all children that came.by friendly glances gave long hours silent next.to.each.other sat must in.the last year a car.park give.way.to  
'The benches on which the older residents of the village, who used to give friendly glances to all the children who came by, used to sit silently next to one another had to give way to a car park last year.'

For other factors that play a role in processing, see Gibson (1998).

Joshi et al. (2000) discuss verbal complexes with reordered arguments. The general pattern that they discuss has the form shown in (15):

(15) σ(NP₁ NP₂ … NPₙ) Vₙ Vₙ₋₁ … V₁

Here, σ stands for any permutation of the noun phrases and V₁ is the finite verb. The authors investigate the properties of Lexicalized Tree Adjoining Grammar (LTAG) with regard to this pattern and notice that LTAG cannot analyze the order in (16) if the semantics is supposed to come out correctly.

(16) NP₂ NP₃ NP₁ V₃ V₂ V₁

Since (17) is possible in German, LTAG is not sufficient to analyze all languages.

(17) dass ihm₂ das Buch₃ niemand₁ zu lesen₃ versprechen₂ darf₁  
that him the book nobody to read promise be.allowed.to  
'that nobody is allowed to promise him to read the book'

Therefore, they propose the extension of TAG discussed in Section 12.2, so-called *tree-local multi-component LTAG* (Tree-local MC-LTAG or TL-MCTAG). They show that TL-MCTAG can analyze (17) but not (18) with the correct semantics. They claim that these orders are not possible in German and argue that in this case, unlike with the relative clause examples, one has both options, that is, the unavailability of such patterns can be explained either as a performance phenomenon or as a competence phenomenon.

(18) NP₂ NP₄ NP₃ NP₁ V₄ V₃ V₂ V₁

If we treat this as a performance phenomenon, then we are making reference to the complexity of the construction and the resulting processing problems for the hearer. The fact that these orders do not occur in corpora can be explained with reference to the principle of cooperativeness. Speakers normally want to be understood and therefore formulate their sentences in such a way that the hearer can understand them. Verbal complexes in German with more than four verbs are hardly ever found since it is possible to simplify very complex sentences with multiple verbs in the right sentence bracket by extraposing material and therefore avoiding ambiguity (see Netter 1991: 5 and Müller 2007a: 262).

The alternative to a performance explanation would involve using a grammatical formalism which is just powerful enough to allow embedding of two verbs and reordering


of their arguments, but rules out embedding of three verbs and reordering of the arguments. Joshi et al. (2000) opt for this solution and therefore attribute the impossibility of the order of arguments in (18) to competence.

In HPSG (and also in Categorial Grammar and in some GB analyses), verbal complexes are analyzed by means of argument composition (Hinrichs & Nakazawa 1989a, 1994a). Under this approach, a verbal complex behaves exactly like a simplex verb and the arguments of the verbs involved can be placed in any order. The grammar does not contain any restriction on the number of verbs that can be combined, nor any constraints that ban embedding beyond a certain depth. In the following, I will show that many reorderings are ruled out for communicative reasons that can be observed even in cases of simple two-place verbs. The conclusion is that the impossibility of embedding four or more verbs should in fact be explained as a performance issue.

Before I present arguments against a competence-based exclusion of (18), I will make a more general comment: corpora cannot help us here since one does not find any instances of verbs with four or more embeddings. Bech (1955) provides an extensive collection of material, but had to construct the examples with four embedded verbs. Meurers (1999b: 94–95) gives constructed examples with five verbs that contain multiple auxiliaries or modal verbs. These examples are barely processable and are not relevant for the discussion here since the verbs in (18) have to select their own arguments. There are therefore not that many verbs left when constructing examples. It is possible to only use subject control verbs with an additional object (e.g., *versprechen* 'to promise'), object control verbs (e.g., *zwingen* 'to force') or AcI verbs (e.g., *sehen* 'to see' or *lassen* 'to let') to construct examples. When constructing examples, it is important to make sure that all the nouns involved differ as much as possible with regard to their case and their selectional restrictions (e.g., animate/inanimate) since these are features that a hearer/reader could use to possibly assign reordered arguments to their heads. If we want to have patterns such as (18) with four NPs each with a different case, then we have to choose a verb that governs the genitive. There are only a very small number of such verbs in German. Although the example constructed by Joshi et al. (2000) in (9b) fulfills these requirements, it is still very marked. It therefore becomes clear that the possibility of finding a corresponding example in a newspaper article is extremely small. This is due to the fact that there are very few situations in which such an utterance would be imaginable. Additionally, all control verbs (with the exception of *helfen* 'to help') require an infinitive with *zu* 'to' and can also be realized incoherently, that is, with an extraposed infinitival complement without verbal complex formation. As mentioned above, a cooperative speaker/author would use a less complex construction and this reduces the probability that these kinds of sentences arise even further.

Notice that tree-local MC-LTAG does not constrain the number of verbs in a sentence. The formalism allows for an arbitrary number of verbs. It is therefore necessary to assume, as in other grammatical theories, that performance constraints are responsible for the fact that we never find examples of verbal complexes with five or more verbs. Tree-local MC-LTAG makes predictions about the possibility of reordering arguments. I consider it wrong to make constraints regarding the mobility of arguments dependent on

the power of the grammatical formalism since the restrictions that one finds are independent of verbal complexes and can be found with simplex verbs taking just two arguments. The problem with reordering is that it still has to be possible to assign the noun phrases to the verbs they belong to. If this assignment leads to ambiguity that cannot be resolved by case, selectional restrictions, contextual knowledge or intonation, then the unmarked constituent order is chosen. Hoberg (1981: 68) shows this very nicely with examples similar to the following:<sup>5</sup>

(19) a. Hanna hat immer schon gewußt, daß das Kind sie verlassen will.  
Hanna has always already known that the child she leave wants  
'Hanna has always known that the child wants to leave her.'

b. # Hanna hat immer schon gewußt, daß sie das Kind verlassen will.  
Hanna has always already known that she the child leave wants  
Preferred reading: 'Hanna has always known that she wants to leave the child.'

c. Hanna hat immer schon gewußt, daß sie der Mann verlassen will.  
Hanna has always already known that she the.nom man leave wants.to  
'Hanna has always known that the man wants to leave her.'

It is not possible to reorder (19a) to (19b) without creating a strong preference for another reading. This is due to the fact that neither *sie* 'she' nor *das Kind* 'the child' are unambiguously marked as nominative or accusative. (19b) therefore has to be interpreted as Hanna being the one that wants something, namely to leave the child. This reordering is possible, however, if at least one of the arguments is unambiguously marked for case as in (19c).

For noun phrases with feminine count nouns, the forms for nominative and accusative as well as genitive and dative are the same. For mass nouns, it is even worse. If they are used without an article, all cases are the same for feminine nouns (e.g., *Milch* 'milk') and also for masculines and neuters with exception of the genitive. In the following example from Wegener (1985: 45) it is hardly possible to switch the dative and accusative object, whereas this is possible if the nouns are used with articles as in (20c,d):

(20) b. Sie mischt Wasser Wein bei.  
she mixes water wine into  
'She mixes wine into the water.'

<sup>5</sup> Instead of *das* 'the', Hoberg uses the possessive pronoun *ihr* 'her'. This makes the sentences more semantically plausible, but one then gets interference from the linearization requirements for bound pronouns. I have therefore replaced the pronouns with the definite article.


The two nouns can only be switched if the meaning of the sentence is clear from the context (e.g., through explicit negation of the opposite) and if the sentence carries a certain intonation.

The problem with verbal complexes is now that with four noun phrases, two of them almost always have the same case if one does not wish to resort to the few verbs governing the genitive. A not particularly nice-sounding example of morphologically unambiguously marked case is (21):

(21) weil er den Mann dem Jungen des Freundes gedenken helfen lassen will  
because he.nom the.acc man the.dat boy of.the.gen friend remember help let wants  
'because he wants to let the man help the boy remember his friend'

Another strategy is to choose verbs that select animate and inanimate objects so that animacy of the arguments can aid interpretation. I have constructed such an example where the most deeply embedded predicate is not a verb but rather an adjective. The predicate *leer fischen* 'to fish empty' is a resultative construction that should be analyzed parallel to verbal complexes (Müller 2002a: Chapter 5).

(22) weil niemand₁ [den Mann]₂ [der Frau]₃ [diesen Teich]₄ leer₄ fischen₃ helfen₂ sah₁  
because nobody.nom the.acc man the.dat woman this.acc pond empty fish help saw  
'because nobody saw the man help the woman fish the pond empty'

If one reads the sentence with the relevant pauses, it is comprehensible. Case is unambiguously marked on the animate noun phrases and our world knowledge helps us to interpret *diesen Teich* 'this pond' as the argument of *leer* 'empty'.

The sentence in (22) would correctly be analyzed by an appropriately written tree-local MC-LTAG and also by argument composition analyses for verbal complexes and resultative constructions. The sentence in (23) is a variant of (22) that corresponds exactly to the pattern of (18):

(23) weil [der Frau]₂ [diesen Teich]₄ [den Mann]₃ niemand₁ leer₄ fischen₃ helfen₂ sah₁  
because the.dat woman this.acc pond the.acc man nobody.nom empty fish help saw  
'because nobody saw the man help the woman fish the pond empty'

(23) is more marked than (22), but this is always the case with local reordering (Gisbert Fanselow, p. c. 2006). This sentence should not be ruled out by the grammar. Its markedness is rather due to the same factors that were responsible for the markedness of reordering of arguments of simplex verbs. Tree-local MC-LTAG cannot correctly analyze sentences such as (23), which shows that this TAG variant is not sufficient for analyzing natural language.

There are varying opinions among TAG researchers as to what should be counted as competence and what should be counted as performance. For instance, Rambow (1994: 15) argues that one should not exclude reorderings that cannot be processed by means of competence grammar or the grammatical formalism. In Chapter 6, he presents a theory of performance that can explain why the reordering of arguments of various verbs in the middle field is harder to process. One should therefore opt for TAG variants such as V-TAG or TT-MC-TAG (Lichte 2007) that are powerful enough to analyze the diverse reorderings and then also use a performance model that makes it possible to explain the gradual differences in acceptability.

An alternative to looking for a grammatical formalism with minimal expressive power is to not restrict the grammatical formalism with regard to its expressive power at all and instead to develop linguistic theories that are as restrictive as possible. For further discussion of this point, see Chapter 17.

# **12.7 Summary and classification**

In sum, we have seen the following: LTAG is lexicalized, that is, there is at least one lexical element in every tree. There are no trees corresponding to the rule S → NP VP since no words are mentioned in this rule. Instead, there are always complex trees that contain both the subject NP and the VP. Inside the VP, there can be as much structure as is necessary to ensure that the verb is contained in the tree. As well as the head, elementary trees in LTAG always contain the arguments of the head. For transitive verbs, this means that both the subject and the object have to be components of the elementary tree. This is also true of the trees used to analyze long-distance dependencies. As shown in Figure 12.13, the object must be part of the tree. The fact that the object can be separated from the verb by multiple sentence boundaries is not represented in the elementary tree, that is, recursive parts of the grammar are not contained in elementary trees. The relevant effects are achieved by adjunction, that is, by insertion of material into elementary trees. The elementary tree for extraction in Figure 12.13 differs from the elementary tree for *likes* in Figure 12.4, which is used in normal SVO clauses. Every minimal construction in which *likes* can occur (subject extraction, topicalization, subject relative clauses, object relative clauses, passive, …) needs its own elementary tree (Kallmeyer & Joshi 2003: 10). The different elementary trees can be connected using lexical rules. These lexical rules map a particular tree treated as underlying onto other trees. In this way, it is possible to derive a passive tree from an active tree. These lexical rules are parallel to transformations in Transformational Grammar; however, one should bear in mind that there is always a lexical element in the tree, which makes the entire grammar more restrictive than grammars with free transformations.

An interesting difference to GB and variants of LFG, CG, and HPSG that assume empty elements is that the variants of TAG presented here<sup>6</sup> do not contain empty elements in the lexicon. They can be used in trees but trees are listed as a whole in the lexicon.

Elementary trees can be of any size, which makes TAG interesting for the analysis of idioms (see Section 18.2). Since recursion is factored out, trees can contain elements that appear very far away from each other in the derived tree (extended domains of locality).

Kasper, Kiefer, Netter & Vijay-Shanker (1995) show that it is possible to translate HPSG grammars that fulfill certain requirements into TAG grammars. This is interesting because one thereby arrives at a grammar whose complexity behavior is known. Whereas HPSG grammars are generally in the Type-0 area, TAG grammars can, depending on the variant, fall into the realm of Type-2 languages (context-free) or into the larger class of mildly context-sensitive grammars (Joshi 1985). Yoshinaga, Miyao, Torisawa & Tsujii (2001) have developed a procedure for translating FB-LTAG grammars into HPSG grammars.

## **Comprehension questions**


## **Exercises**

(24) der dem König treue Diener  
the the.dat king loyal servant  
'the servant loyal to the king'

<sup>6</sup> See Rambow (1994) and Kallmeyer (2005: 194), however, for TAG analyses with an empty element in the lexicon.

# **Further reading**

Some important articles are Joshi, Levy & Takahashi (1975), Joshi (1987a), and Joshi & Schabes (1997). Many works discuss formal properties of TAG and are therefore not particularly accessible for linguistically interested readers. Kroch & Joshi (1985) give a good overview of linguistic analyses. An overview of linguistic and computational linguistic work in TAG can be found in the volume edited by Abeillé and Rambow from 2000. Rambow (1994) compares his TAG variant (V-TAG) to Karttunen's *Radical Lexicalism* approach, Uszkoreit's GPSG, Combinatory Categorial Grammar, HPSG and Dependency Grammar.

Shieber & Johnson (1993) discuss psycholinguistically plausible processing models and show that it is possible to do incremental parsing with TAG. They also present a further variant of TAG: synchronous TAG. In this TAG variant, there is a syntactic tree and a semantic tree connected to it. When building syntactic structure, the semantic structure is always built in parallel. This structure built in parallel corresponds to the level of Logical Form derived from S-structure using transformations in GB.

Rambow (1994: Chapter 6) presents an automaton-based performance theory. He applies it to German and shows that the processing difficulties that arise when reordering arguments of multiple verbs can be explained.

Kallmeyer & Romero (2008) show how it is possible to derive MRS representations directly via a derivation tree using FTAG. In each top node, there is a reference to the semantic content of the entire structure and each bottom node makes reference to the semantic content below the node. In this way, it becomes possible to insert an adjective (e.g., *mutmaßlichen* 'suspected') into an NP tree *alle Mörder* 'all murderers' so that the adjective has scope over the nominal part of the NP (*Mörder* 'murderers'): for adjunction of the adjective to the N node, the adjective can access the semantic content of the noun. The top node of *mutmaßlichen* is then the top node of the combination *mutmaßlichen Mörder* 'suspected murderers' and this ensures that the meaning of *mutmaßlichen Mörder* is correctly embedded under the universal quantifier.

# **Part II**

# **General discussion**

# **Preface for Part II**

This book is very long. For technical reasons it was split into two parts in the print version of earlier editions. As of 2020 the book can be published as one volume, but there are still the two parts and I think it is helpful for the readers to keep this preface for Part II.

The first part contains the introduction to all the theories and the second part is a collection of topics that are relevant for more than one theory, so it would be inappropriate to discuss them within one of the chapters of Part I. While Part I has a more introductory character and can be used for teaching BA and MA students, the material in Part II is for more advanced readers. I never used it for teaching; it may be a good resource for classes on these topics nevertheless. In what follows, I give a brief overview of the chapters of Part II.

Chapter 13 concerns the assumption of innate domain-specific knowledge, some sort of Universal Grammar. This is probably the hottest debate in linguistics and the side one takes in this debate has severe consequences for the theories that one considers acceptable. In Mainstream Generative Grammar (MGG), lots of invisible elements are postulated and in some versions of MGG, it is claimed that these are present in the grammars of all languages of the world, even though there is no direct evidence for these categories in some of the languages. Usually the motivation for assuming an empty element is that there is another language with visible material in the respective position. Whether one considers such an argumentation as legitimate crucially depends on whether one believes in innate domain-specific knowledge. Chapter 13 tries to summarize the discussion and to show that for all claims regarding the existence of Universal Grammar there are counterclaims. Hauser, Chomsky & Fitch (2002) greatly revised Chomsky's assumptions regarding UG. According to them, UG contains rather few and abstract constraints, but nevertheless UG lives on in the theories developed today and in the way arguments for them are made. Hence, a chapter like Chapter 13 is important to understand the discussions and the alternatives to the theories developed in MGG.

Chapter 14 deals with the difference between generative-enumerative and model-theoretic approaches. Generative-enumerative approaches (basically phrase-structure grammars and the MGG variants of them) enumerate a set of strings considered to be well-formed with respect to a grammar and possibly a set of transformations. The model-theoretic view does not say anything about sets but rather deals with formulating well-formedness conditions for utterances. While the two seem very similar at first glance, there are interesting differences in various respects. Chapter 14 deals with utterance fragments and graded acceptability and discusses alleged problems for model-theoretic approaches.

Chapter 15 introduces the competence/performance distinction. Some researchers reject this distinction completely (most of the researchers working within CxG), others

assume it and try to develop models that are performance-compatible (HPSG, LFG, CG, TAG) and others develop models that are highly implausible from a performance point of view (lots of Minimalist work). Chapter 15 introduces the concepts, discusses whether it makes sense to distinguish competence and performance (I think it does) and examines what is required for performance-compatible competence models.

It is often argued that theory X must be wrong since it does not explain how language can be acquired. Interestingly, these accusations go both ways: Construction Grammarians criticize Minimalists for assuming half of their theory to be hard coded in our genetic material and Minimalists claim that constructionist theories do not have an explanation for there being recursive structures. Chapter 16 discusses the major approaches to acquisition and explains where they differ and what shortcomings exist.

Chapter 17 deals with the generative capacity of grammar formalisms. The generative capacity played an important role in the history of generative grammar. Early versions turned out to be too powerful, resulting in quite radical changes of the framework in the 1970s and 1980s. One key advantage of GPSG was that it was much more restrictive than the transformational approaches that were around at the time. As it turned out, it was too restrictive to model language as such since there were languages that could be shown to require more powerful machinery. This led to the development of HPSG. The HPSG formalism has Turing power, which is the worst complexity a formalism can have. Somewhat ironically, most proponents of HPSG do not care about this at all. Chapter 17 explains why.

Chapter 18 is a brief chapter including the discussion of three topics that come up again and again: binary branching vs. flat structures, locality and recursion. Some researchers argue (without proof) that all theories should assume binary branching structures because otherwise the grammars are not acquirable, while others argue for the opposite view, again often with acquisition arguments (Section 18.1). Locality is an issue that is important both in Minimalism and in other theories like HPSG and LFG. However, there are differences in what is understood by locality. Section 18.2 deals with these issues. Languages usually have recursive structures. So frameworks have to have ways to account for this. All the frameworks discussed in this book do account for recursion despite claims to the contrary. This is the topic of Section 18.3.

Chapter 19 deals with empty elements. While some theories have more invisible units in their trees than visible ones (see Figure 4.21 on page 151), there are frameworks that do not assume any empty elements, referring to acquisition again. The chapter shows that certain grammars with empty elements can be converted into grammars without empty elements. I show that empty elements are not required for semantic reasons; underspecification can be used instead. The chapter shows how important the assumptions about UG (Chapter 13) are: if empty elements correspond to visible material that in certain situations occupies the place of the empty element, then there are chances for them to be acquired from data. If the empty elements are stipulated with reference to material in other languages, there is a real acquisition problem. A final section shows that (some) empty elements can be replaced by lexical rules (or templates), which correspond to a certain type of transformation. This relativizes the debate that arose around stipulating such theoretical entities.

When it comes to mechanisms that are used in different frameworks, there is a further difference between (some variants of) GB/Minimalism and the other theories. Some theories in the transformational frameworks assume that extraction, scrambling and passive are dealt with using the same descriptive tool: movement. Other theories use lexical rules, different phrasal schemata and slash propagation techniques. Chapter 20 shows that phenomena like the so-called remote passive that seem to require movement can be dealt with without movement. The chapter repeats an example from the GB chapter that showed that the movement-based analysis of the German passive is problematic, since nothing moves in German sentences. Passive is a phenomenon that is independent of movement; it is just English that is SVO and requires a subject before the verb. Since German does not require subjects, nothing has to be reordered. Dependency Grammar proposals assuming the same descriptive tool for the three phenomena are discussed as well, their shortcomings are pointed out and it is concluded that the three phenomena are independent or at least have to be distinguishable in terms of their treatment in a theory.

Chapter 21 discusses a further highly controversial issue: the question of whether language consists of phrasal patterns or whether there are abstract combinatorial rules that combine lexical items that contain rich information. Again, questions of language acquisition play a role here. This chapter is rather long but it reflects the complexity of the discussion in the literature. Many arguments for and against phrasal constructions are evaluated and it is shown that grammars have to be able to account for phrasal patterns, but that so-called argument structure constructions are better treated lexically. This discussion connects nicely to the GPSG–HPSG transition in the 1980s where researchers switched from a phrasal model to a lexical one in the spirit of Categorial Grammar.

The brief Chapter 22 is related to the topic of Chapter 21 but it compares the TAG, LFG and HPSG approaches to complex predicates and points out that there are problems with a certain treatment of complex predicates in TAG. TAG is very similar in spirit to phrasal Construction Grammar approaches: the elementary trees of TAG are phrasal patterns. It is shown that lexical models like HPSG have an advantage over the phrasal TAG approach since they specify the lexical potential of items, which is not necessarily the same as the actual realization of dependents. It is pointed out that the LFG analysis of complex predicates, allowing for overrides of lexical information at phrasal nodes, lies somewhere in the middle between TAG and HPSG.

Chapter 23 showcases a way to develop linguistic theories that capture cross-linguistic generalizations. It differs from the general top-down approach in Minimalism, where it is assumed that certain constraints hold for all languages. Instead, constraints that hold for known languages are collected in sets in a bottom-up way. The most general set contains all constraints that hold for all known languages. This approach is independent of the assumption of an innate UG and hence compatible with both CxG and Minimalism. The chapter also contains some speculations on how to integrate universal constraints in a Cinque-like UG way in case there turns out to be an empirical basis for this.

Chapter 24 draws some conclusions as to what an appropriate framework for describing languages should look like.

# **13 The innateness of linguistic knowledge**

If we try and compare the theories presented in this book, we notice that there are a number of similarities.<sup>1</sup> In all of the frameworks, there are variants of theories that use feature-value pairs to describe linguistic objects. The syntactic structures assumed are sometimes similar. Nevertheless, there are some differences that have often led to fierce debates between members of the various schools. Theories differ with regard to whether they assume transformations, empty elements, phrasal or lexical analyses, binary branching or flat structures.

Every theory has to not only describe natural language, but also explain it. It is possible to formulate an infinite number of grammars that license structures for a given language (see Exercise 1 on page 78). These grammars are *observationally adequate*. A grammar achieves *descriptive adequacy* if it corresponds to observations and the intuitions of native speakers.<sup>2</sup> A linguistic theory is descriptively adequate if it can be used to formulate a descriptively adequate grammar for every natural language. However, grammars achieving descriptive adequacy do not always necessarily reach *explanatory adequacy*. Grammars that achieve explanatory adequacy are those that are compatible with acquisition data, that is, grammars that could plausibly be acquired by human speakers (Chomsky 1965: 24–25).

Chomsky (1965: 25) assumes that children already have domain-specific knowledge about what grammars could, in principle, look like and then extract information about what a given grammar actually looks like from the linguistic input. The most prominent

<sup>1</sup> The terms *theory* and *framework* may require clarification. A framework is a common set of assumptions and tools that is used when theories are formulated. In this book, I discussed theories of German. These theories were developed in certain frameworks (GB, GPSG, HPSG, LFG, …) and of course there are other theories of other languages that share the same fundamental assumptions. These theories differ from the theories of German presented here but are formulated in the same framework. Haspelmath (2010b) argues for framework-free grammatical theory. If grammatical theories used incompatible tools, it would be difficult to compare languages. So assuming transformations for English nonlocal dependencies and a slash mechanism for German would make comparison impossible. I agree with Haspelmath that the availability of formal tools may lead to biases, but in the end the facts have to be described somehow. If nothing is shared between theories, we end up with isolated theories formulated in one-man frameworks. If there *is* shared vocabulary and if there are standards for doing framework-free grammatical theory, then the framework is framework-free grammatical theory. See Müller (2015c) and Chapter 23 of this book for further discussion.

<sup>2</sup> This term is not particularly useful as subjective factors play a role. Not everybody finds grammatical theories intuitively correct where it is assumed that every observed order in the languages of the world has to be derived from a common Specifier-Head-Complement configuration, and also only by movement to the left (see Section 4.6.1 for the discussion of such proposals).


variant of acquisition theory in Mainstream Generative Grammar (MGG) is the Principles & Parameters theory, which claims that parametrized principles restrict the grammatical structures possible and children just have to set parameters during language acquisition (see Section 3.1.2).

Over the years, the innateness hypothesis, also known as nativism, has undergone a number of modifications. In particular, assumptions about exactly what forms part of the innate linguistic knowledge, so-called Universal Grammar (UG), have often been subject to change.

Nativism is often rejected by proponents of Construction Grammar, Cognitive Grammar and by many other researchers working in other theories. Other explanations are offered for the facts normally used to argue for the innateness of grammatical categories, syntactic structures or relations between linguistic objects in syntactic structures. Another point of criticism is that the actual complexity of analyses is blurred by the fact that many of the stipulations are simply assumed to be part of UG. The following is a caricature of a certain kind of argumentation in GB/Minimalism analyses:


By attributing arbitrary assumptions to UG, it is possible to keep the rest of the analysis very simple.

The following section will briefly review some of the arguments for language-specific innate knowledge. We will see that none of these arguments are uncontroversial. In the following chapters, I will discuss fundamental questions about the architecture of grammar, the distinction between competence and performance and how to model performance phenomena, the theory of language acquisition as well as other controversial questions, e.g., whether it is desirable to postulate empty elements in linguistic representations and whether language should be explained primarily based on the properties of words or rather phrasal patterns.

Before we turn to these hotly debated topics, I want to discuss the one that is most fiercely debated, namely the question of innate linguistic knowledge. In the literature, one finds the following arguments for innate knowledge:


<sup>3</sup>Also, see https://www.dailymotion.com/video/x2oh8ia. 2020-08-31.

	- **–** Williams Syndrome,
	- **–** the KE family with FoxP2 mutation and

Pinker (1994) offers a nice overview of these arguments. Tomasello (1995) provides a critical review of this book. The individual points will be discussed in what follows.

# **13.1 Syntactic universals**

The existence of syntactic universals has been taken as an argument for the innateness of linguistic knowledge (e.g., Chomsky 1998: 33; Pinker 1994: 237–238). There are varying claims in the literature with regard to what is universal and language-specific. The most prominent candidates for universals are:<sup>4</sup>


These supposed universals will each be discussed briefly in what follows. One should emphasize that there is by no means a consensus that these are universal and that the observed properties actually require postulating innate linguistic knowledge.

<sup>4</sup> Frans Plank has an archive of universals in Konstanz (Plank & Filimonova 2000): https://typo.uni-konstanz. de/archive/intro/. On 2015-12-23, it contained 2029 entries. The entries are annotated with regard to their quality, and it turns out that many of the universals are statistical universals, that is, they hold for the overwhelming majority of languages, but there are some exceptions. Some of the universals are marked as almost absolute, that is, very few exceptions are known. 1153 were marked as absolute or absolute with a question mark. 1021 of these are marked as absolute without a question mark. Many of the universals captured are implicational universals, that is, they have the form: if a language has the property X, then it also has the property Y. The universals listed in the archive are, in part, very specific and refer to the diachronic development of particular grammatical properties. For example, the fourth entry states that: *If the exponent of vocative is a prefix, then this prefix has arisen from 1st person possessor or a 2nd person subject.*

### **13.1.1 Head Directionality Parameter**

The Head Directionality Parameter was already introduced in Section 3.1.2. The examples in (7) on page 87, repeated below as (1), show that the structures in Japanese are the mirror image of the English structures:

(1)	a. be showing pictures of himself
	b. zibun-no syasin-o mise-te iru
		himself-of picture showing be

In order to capture these facts, a parameter was proposed that is responsible for the position of the head relative to its arguments (e.g., Chomsky 1986b: 146; 1988: 70).

By assuming a Head Directionality Parameter, Radford (1990: 60–61; 1997: 19–22), Pinker (1994: 234, 238), Baker (2003: 350) and other authors claim, either explicitly or implicitly, that there is a correlation between the direction of government of verbs and that of adpositions, that is, languages with verb-final order have postpositions and languages with VO order have prepositions. This claim can be illustrated with the language pair English/Japanese and the examples in (1): the adposition *no* occurs after the pronoun, the noun *syasin-o* 'picture' follows the PP belonging to it, the main verb follows its object and the auxiliary *iru* occurs after the main verb *mise-te*. The individual phrases are the exact mirror image of the respective phrases in English.

A single counterexample is enough to disprove a universal claim and in fact, it is possible to find a language that has verb-final order but nevertheless has prepositions. Persian is such a language. An example is given in (2):

(2)	man ketâb-hâ-râ be Sepide dâd-am
		I book-pl-râ to Sepide gave-1sg
		'I gave the books to Sepide.'

In Section 3.1.4, it was shown that German cannot be easily described with this parameter: German is a verb-final language but has both prepositions and postpositions. The World Atlas of Language Structures lists 41 languages with VO order and postpositions and 14 languages with OV order and prepositions (Dryer 2013b,a).<sup>5</sup> An earlier study by Dryer (1992) done with a smaller sample of languages also points out that there are exceptions to what the Head Directionality Parameter would predict.

Furthermore, Gibson & Wexler (1994: 422) point out that a single parameter for the position of heads would not be enough since complementizers in both English and German/Dutch occur before their complements; however, English is a VO language, whereas German and Dutch count as OV languages.

If one wishes to determine the direction of government based on syntactic categories (Gibson & Wexler 1994: 422, Chomsky 2005: 15), then one has to assume that the syntactic categories in question belong to the inventory of Universal Grammar (see Section 13.1.7, for more on this). Difficulties with prepositions and postpositions also arise for this kind of assumption as these are normally assigned to the same category (P). If we were to

<sup>5</sup> http://wals.info/combinations/83A\_85A#2/15.0/153.0, 2018-02-20.

introduce special categories for both prepositions and postpositions, then a four-way division of parts of speech like the one on page 94 would no longer be possible. One would instead require an additional binary feature and would thereby automatically predict eight categories (with three binary features, 2<sup>3</sup> = 8 classes), although only five (the four commonly assumed plus an extra one) are actually needed.

One can see that the relation between the direction of government of verbs and the choice of adpositions that Pinker formulated as a universal claim is in fact correct, but as a tendency rather than as a strict rule; that is, in many languages there is a correlation between the use of prepositions or postpositions and the position of the verb (Dryer 1992: 83).<sup>6</sup>

In many languages, adpositions have evolved from verbs. In Chinese grammar, it is commonplace to refer to a particular class of words as coverbs. These are words that can be used both as prepositions and as verbs. If we view languages historically, then we can find explanations for these tendencies that do not have to make reference to innate linguistic knowledge (see Evans & Levinson 2009a: 445).

Furthermore, it is possible to explain the correlations with reference to processing preferences: in languages with the same direction of government, the distance between the verb and the pre-/postposition is less (Figure 13.1a–b) than in languages with differing directions of government (Figure 13.1c–d). From the point of view of processing, languages with the same direction of government should be preferred since they allow the hearer to better identify the parts of the verb phrase (Newmeyer (2004a: 219–221) cites Hawkins (2004: 32) with a relevant general processing preference, see also Dryer (1992: 131)). This tendency can thus be explained as the grammaticalization of a performance preference (see Chapter 15 for the distinction between competence and performance) and recourse to innate language-specific knowledge is not necessary.
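The effect can be made concrete with a small toy computation; the clause builder and the English-like word strings below are invented for illustration and are not taken from the works cited. For each combination of verb position and adposition type, it measures the distance between the verb and the adposition; the harmonic orders of Figure 13.1a–b come out with shorter distances than the disharmonic orders of Figure 13.1c–d with the same verb position.

```python
# Toy illustration of the verb-adposition distance argument (cf. Figure 13.1).
NP = ["the", "letter"]    # object of the verb
PP_NP = ["the", "table"]  # object of the adposition

def clause(verb_initial, prepositional):
    """Build V NP PP or NP PP V orders with a pre- or postposition."""
    pp = ["on"] + PP_NP if prepositional else PP_NP + ["on"]
    return ["put"] + NP + pp if verb_initial else NP + pp + ["put"]

for verb_initial in (True, False):
    for prepositional in (True, False):
        words = clause(verb_initial, prepositional)
        distance = abs(words.index("put") - words.index("on"))
        print(f"{' '.join(words):30} verb-adposition distance: {distance}")
```

For verb-initial clauses, the preposition keeps the adposition closer to the verb (distance 3 vs. 5); for verb-final clauses, the postposition does (distance 1 vs. 3).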

# **13.1.2 X structures**

It is often assumed that all languages have syntactic structures that correspond to the X schema (see Section 2.5) (Pinker 1994: 238; Meisel 1995: 11, 14; Pinker & Jackendoff 2005: 216). There are, however, languages such as Dyirbal (Australia) for which it does not seem to make sense to assume hierarchical structure for sentences. Thus, Bresnan (2001: 110) assumes that Tagalog, Hungarian, Malayalam, Warlpiri, Jiwarli, Wambaya, Jakaltek and other such languages do not have a VP node, but rather a rule taking the form of (3):

(3) S → C<sup>∗</sup>

Here, C<sup>∗</sup> stands for an arbitrary number of constituents and there is no head in the structure. Other examples for structures without heads will be discussed in Section 21.10.

<sup>6</sup> Pinker (1994: 234) uses the word *usually* in his formulation. He thereby implies that there are exceptions and that the correlation between the ordering of adpositions and the direction of government of verbs is actually a tendency rather than a universally applicable rule. However, in the pages that follow, he argues that the Head Directionality Parameter forms part of innate linguistic knowledge. Travis (1984: 55) discusses data from Mandarin Chinese that do not correspond to the correlations she assumes. She then proposes treating the Head Directionality Parameter as a kind of Default Parameter that can be overridden by other constraints in the language.

Figure 13.1: Distance between verb and preposition for various head orders according to Newmeyer (2004a: 221)

X structure was introduced to restrict the form of possible rules. The assumption was that these restrictions reduce the class of grammars one can formulate and thus – according to the assumption – make the grammars easier to acquire. But as Kornai & Pullum (1990) have shown, the assumption of X structures does not lead to a restriction with regard to the number of possible grammars if one allows for empty heads. In GB, a number of null heads were used, and in the Minimalist Program their number has increased significantly. For example, the rule in (3) can be reformulated as follows:

(4) V′ → V<sup>0</sup> C<sup>∗</sup>

Here, V<sup>0</sup> is an empty head. Since specifiers are optional, V′ can be projected to VP and we arrive at a structure corresponding to the X schema.
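Kornai & Pullum's point can be illustrated with a minimal sketch; the tuple encoding of structures below is invented for illustration. The flat, headless structure licensed by (3) and the X-schema-conforming structure built with an empty head as in (4) cover exactly the same strings, since the empty head contributes no phonological material.

```python
def flat_clause(constituents):
    # rule (3): S -> C*, a flat structure without a head
    return ("S",) + tuple(constituents)

def xbar_clause(constituents):
    # rule (4): V' -> V0 C*, where V0 is an empty head; since
    # specifiers are optional, V' is projected directly to VP
    v_bar = ("V'", ("V0", "")) + tuple(constituents)
    return ("VP", v_bar)

print(flat_clause(["NP", "V", "NP"]))  # ('S', 'NP', 'V', 'NP')
print(xbar_clause(["NP", "V", "NP"]))  # ('VP', ("V'", ('V0', ''), 'NP', 'V', 'NP'))
```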

Apart from the problem with languages with very free constituent order, there are further problems with adjunction structures: Chomsky's analysis of adjective structure in X theory (Chomsky 1970: 210; see also Section 2.5 of this book, in particular Figure 2.8 on page 74) is not straightforwardly applicable to German since, unlike English, adjective phrases in German are head-final and degree modifiers must directly precede the adjective:

(5)	a. der auf seinen Sohn sehr stolze Mann
		the of his son very proud man
		'the man very proud of his son'

Following the X schema, *auf seinen Sohn* has to be combined with *stolze* first and only then can the resulting A projection be combined with its specifier *sehr* (see Figure 2.8 on page 74 for the structure of adjective phrases in English). It is therefore only possible to derive orders such as (5b) or (5c). Neither of these is possible in German. It is only possible to rescue the X schema if one assumes that German is exactly like English and that, for some reason, the complements of adjectives must be moved to the left. If we allow this kind of repair approach, then of course any language can be described using the X schema. The result would be that one would have to postulate a vast number of movement rules for many languages, which would be extremely complex and difficult to motivate from a psycholinguistic perspective. See Chapter 15 for grammars compatible with performance.

A further problem for X theory in its strictest form as presented in Section 2.5 is posed by so-called hydra clauses (Perlmutter & Ross 1970, Link 1984, Kiss 2005):

(6)	a. A man entered the room and a woman went out who were quite similar.
	b. [[The boy] and [the girl]] who dated each other are friends of mine.

Since the relative clauses in (6) refer to a group of referents, they can only attach to the result of the coordination. The entire coordination is an NP, however, and adjuncts should actually be attached at the X level. Adjectives in Persian pose the reverse case to that of relative clauses in German and English: Samvelian (2007) argues for an analysis where adjectives are combined with nouns directly, and only the combination of nouns and adjectives is then combined with a PP argument.

The discussion of German and English shows that the introduction of specifiers and adjuncts cannot be restricted to particular projection levels, and the preceding discussion of non-configurational languages has shown that the assumption of intermediate levels does not make sense for every language.

It should also be noted that Chomsky himself assumed in 1970 that languages can deviate from the X schema (1970: 210).

If one is willing to encode all information about combination in the lexicon, then one could get by with very abstract combinatorial rules that would hold universally. Examples of this kind of combinatorial rule are the multiplication rules of Categorial Grammar (see Chapter 8) and Merge in the Minimalist Program (see Chapter 4). The rules in question simply state that two linguistic objects are combined. These kinds of combination of course exist in every language. With completely lexicalized grammars, however, it is only possible to describe languages if one allows for null heads and makes certain ad hoc assumptions. This will be discussed in Section 21.10.
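For concreteness, here is a minimal sketch of one such abstract combinatorial rule, forward application as known from Categorial Grammar; the tuple encoding of categories is invented for illustration.

```python
def forward_apply(functor, argument):
    """Forward application: a functor X/Y combines with a Y to its right, yielding X."""
    if isinstance(functor, tuple) and functor[1] == "/" and functor[2] == argument:
        return functor[0]
    return None  # the two objects cannot be combined by this rule

VP = ("S", "\\", "NP")  # a VP: a sentence lacking an NP to its left
TV = (VP, "/", "NP")    # a transitive verb: a VP lacking an NP to its right

print(forward_apply(TV, "NP"))  # ('S', '\\', 'NP'), i.e., a VP
```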

## **13.1.3 Grammatical functions such as subject and object**

Bresnan & Kaplan (1982: xxv), Pinker (1994: 236–237), Baker (2003: 349) and others assume that all languages have subjects and objects. In order to determine what exactly this claim means, we have to explore the terms themselves. For most European languages, it is easy to say what a subject and an object is (see Section 1.7); however, it has been argued that this is not possible for all languages or that it does not make sense to use these terms at all (Croft 2001: Chapter 4; Evans & Levinson 2009a: Section 4).

In theories such as LFG – the one in which Pinker worked – grammatical functions play a primary role. The fact that it is still controversial whether one should view sentences as subjects, objects or as specially defined sentential arguments (xcomp) (Dalrymple & Lødrup 2000, Berman 2003b, 2007, Alsina, Mohanan & Mohanan 2005, Forst 2006) serves to show that there is at least some leeway for argumentation when it comes to assigning grammatical functions to arguments. It is therefore likely that one can find an assignment of grammatical functions to the arguments of a functor in all languages.

Unlike LFG, grammatical functions are irrelevant in GB (see Williams 1984, Sternefeld 1985a) and Categorial Grammar. In GB, grammatical functions can only be determined indirectly by making reference to tree positions. Thus, in the approach discussed in Chapter 3, the subject is the phrase in the specifier position of IP.

In later versions of Chomskyan linguistics, there are functional nodes that seem to correspond to grammatical functions (AgrS, AgrO, AgrIO, see page 146). However, Chomsky (1995b: Section 4.10.1) remarks that these functional categories were only assumed for theory internal reasons and should be removed from the inventory of categories that are assumed to be part of UG. See Haider (1997a) and Sternefeld (2006: 509–510) for a description of German that does without functional projections that cannot be motivated in the language in question.

The position taken by HPSG is somewhere in the middle: a special valence feature is used for subjects (in grammars of German, there is a head feature that contains a representation of the subject for non-finite verb forms). However, the value of the subj feature is derived from more general theoretical considerations: in German, the least oblique element with structural case is the subject (Müller 2002a: 153; 2007a: 311).

In GB theory (Extended Projection Principle, EPP, Chomsky (1982: 10)) and also in LFG (Subject Condition), there are principles ensuring that every sentence must have a subject. It is usually assumed that these principles hold universally.<sup>7</sup>

As previously mentioned, there are no grammatical functions in GB, but there are structural positions that correspond to grammatical functions. The position corresponding to the subject is the specifier of IP. The EPP states that there must be an element in SpecIP. If we assume universality of this principle, then every language must have an element in this position. As we have already seen, there is a counterexample to this universal claim: German. German has an impersonal passive (7a) and there are also subjectless verbs (7b,c) and adjectives (7d–f).<sup>8</sup>

<sup>7</sup>However, Chomsky (1981a: 27) allows for languages not to have a subject. He assumes that this is handled by a parameter. Bresnan (2001: 311) formulates the Subject Condition, but mentions in a footnote that it might be necessary to parameterize this condition so that it only holds for certain languages.

<sup>8</sup> For further discussion of subjectless verbs in German, see Haider (1993: Sections 6.2.1, 6.5), Fanselow (2000b), Nerbonne (1986b: 912) and Müller (2007a: Section 3.2).

(7)	b. Ihm graut vor der Prüfung.
		him.dat dreads before the exam
		'He dreads the exam.'
	c. Mich friert.
		me.acc freezes
		'I am freezing.'
	d. weil schulfrei ist
		because school.free is
		'because there is no school today'
	e. weil ihm schlecht ist
		because him.dat ill is
		'because he is not feeling well'
	f. Für dich ist immer offen.<sup>9</sup>
		for you is always open
		'We are always open for you.'

Most of the predicates that can be used without subjects can also be used with an expletive subject. An example is given in (8):

(8)	dass es ihm vor der Prüfung graut
		that expl him before the exam dreads
		'He dreads the exam.'

However, there are verbs such as *liegen* 'lie' in example (9a) from Reis (1982: 185) that cannot occur with an *es* 'it'.

(9)	a. Mir liegt an diesem Plan.
		me.dat lies on this plan
		'This plan matters to me.'
	b. * Mir liegt es an diesem Plan.
		me.dat lies it on this plan

Nevertheless, the applicability of the EPP and the Subject Condition is sometimes also assumed for German. Grewendorf (1995: 1311) assumes that there is an empty expletive that fills the subject position of subjectless constructions.

Berman (1999: 11; 2003a: Chapter 4), working in LFG, assumes that verbal morphology can fulfill the subject role in German and therefore even in sentences where no subject is overtly present, the position for the subject is filled in the f-structure. A constraint stating that all f-structures without a pred value must be third person singular applies to the f-structure of the unexpressed subject. The agreement information in the finite

<sup>9</sup>Haider (1986a: 18).


verb has to match the information in the f-structure of the unexpressed subject and hence the verbal inflection in subjectless constructions is restricted to be 3rd person singular (Berman 1999).

As we saw on page 166, some researchers working in the Minimalist Program even assume that there is an object in every sentence (Stabler quoted in Veenstra (1998: 61, 124)). Objects of monovalent verbs are assumed to be empty elements.

If we allow these kinds of tools, then it is of course easy to maintain the existence of many universals: we claim that a language X has the property Y and then assume that the structural items are invisible and have no meaning. These analyses can only be justified theory-internally with the goal of uniformity (see Culicover & Jackendoff 2005: Section 2.1.2).<sup>10</sup>

### **13.1.4 Binding principles**

The principles governing the binding of pronouns are also assumed to be part of UG (Chomsky 1998: 33; Crain, Thornton & Khlentzos 2009: 146; Rizzi 2009b: 468). Binding Theory in GB theory has three principles: Principle A states that reflexives such as *sich* or *himself* must refer to an element (their antecedent) inside a certain local domain (the binding domain). Simplifying a bit, one could say that a reflexive has to refer to a co-argument.

(10)	Klaus<sub>i</sub> sagt, dass Peter<sub>j</sub> sich<sub>∗i/j</sub> rasiert hat.
		Klaus says that Peter himself shaved has
		'Klaus says that Peter has shaved himself.'

Principle B holds for personal pronouns and states that these cannot refer to elements inside of their binding domain.

(11)	Klaus<sub>i</sub> sagt, dass Peter<sub>j</sub> ihn<sub>i/∗j</sub> rasiert hat.
		Klaus says that Peter him shaved has
		'Klaus says that Peter has shaved him.'

Principle C determines what referential expressions can refer to. According to Principle C, an expression A<sub>1</sub> cannot refer to another expression A<sub>2</sub> if A<sub>2</sub> c-commands A<sub>1</sub>. C-command is defined with reference to the structure of the utterance. There are various definitions of c-command; a simple version states that A c-commands B if there is a path in the constituent structure that goes upwards from A to the next branching node and then only downwards to B.
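The simple definition just given can be stated as a small program; the tree encoding and the toy tree are invented for illustration.

```python
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self

def dominates(node, target):
    return any(c is target or dominates(c, target) for c in node.children)

def c_commands(a, b):
    """A c-commands B if the next branching node above A dominates B."""
    node = a.parent
    while node is not None and len(node.children) < 2:
        node = node.parent  # go upwards to the next branching node
    return node is not None and dominates(node, b)

# toy tree [S [NP er] [VP [V sah] [NP Max]]]:
er, max_ = Node("er"), Node("Max")
tree = Node("S", [Node("NP", [er]),
                  Node("VP", [Node("V", [Node("sah")]),
                              Node("NP", [max_])])])
print(c_commands(er, max_))  # True: the subject c-commands the object
print(c_commands(max_, er))  # False: VP does not dominate the subject
```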

For the example in (12a), this means that *Max* and *er* 'he' cannot refer to the same individual since *er* c-commands *Max*.

(12)	a. Er sagt, dass Max Brause getrunken hat.
		he says that Max soda drunk has
		'He says that Max drank soda.'
	b. Max sagt, dass er Brause getrunken hat.
		Max says that he soda drunk has
		'Max said that he drank soda.'

<sup>10</sup>For arguments from language acquisition, see Chapter 16.

	c. Als er hereinkam, trank Max Brause.
		as he came.in drank Max soda
		'As he came in, Max was drinking soda.'

This is possible in (12b), however, as there is no such c-command relation. For *er* 'he', it must only be the case that it does not refer to another argument of the verb *getrunken* 'drunk', and this is indeed the case in (12b). Similarly, there is no c-command relation between *er* 'he' and *Max* in (12c) since the pronoun *er* is inside a complex structure. *er* 'he' and *Max* can therefore refer to the same or to different individuals in (12b) and (12c).

Crain, Thornton & Khlentzos (2009: 147) point out that (12b,c) and the corresponding English examples are ambiguous, whereas (12a) is not, due to Principle C. This means that one reading is not available. In order to acquire the correct binding principles, the learner would need information about which meanings expressions do not have. The authors note that children already master Principle C at age three and they conclude from this that Principle C is a plausible candidate for innate linguistic knowledge. (This is a classic kind of argumentation. For Poverty of the Stimulus arguments, see Section 13.8 and for more on negative evidence, see Section 13.8.4).

Evans & Levinson (2009b: 483) note that Principle C is a strong cross-linguistic tendency but nevertheless has some exceptions. As examples, they mention reciprocal expressions in Abaza, where affixes corresponding to *each other* occur in subject position rather than object position, as well as Guugu Yimidhirr, where pronouns in a superordinate clause can be coreferent with full NPs in a subordinate clause.

Furthermore, Fanselow (1992b: 351) refers to the examples in (13) that show that Principle C is a poor candidate for a syntactic principle.

(13)	a. Mord ist ein Verbrechen.
		murder is a crime
		'Murder is a crime.'
	b. Ein gutes Gespräch hilft Probleme überwinden.
		a good conversation helps problems overcome
		'A good conversation helps to overcome problems.'

(13a) expresses that it is a crime when somebody kills someone else, and (13b) refers to conversations with another person rather than talking to oneself. In these sentences, the nominalizations *Mord* 'murder' and *Gespräch* 'conversation' are used without any arguments of the original verbs; there are thus no arguments standing in a syntactic command relation to one another. Nevertheless, the arguments of the nominalized verbs cannot be coreferential. It therefore seems that there is a principle at work saying that the argument slots of a predicate must be interpreted as non-coreferential as long as the identity of the arguments is not explicitly expressed by linguistic means.

In sum, one can say that there are still a number of unsolved problems with Binding Theory. The HPSG variants of Principles A–C in English cannot even be applied to German (Müller 1999b: Chapter 20). Working in LFG, Dalrymple (1993) proposes a variant

of Binding Theory where the binding properties of pronominal expressions are determined in the lexicon. In this way, the language-specific properties of pronouns can be accounted for.

## **13.1.5 Properties of long-distance dependencies**

The long-distance dependencies discussed in the preceding chapters are subject to certain restrictions. For example, nothing can be extracted out of sentences that are part of a noun phrase in English. Ross (1967: 70) calls the relevant constraint the *Complex NP Constraint*. In later work, the attempt was made to group this and other constraints, such as the *Right Roof Constraint* also formulated by Ross (1967: Section 5.1.2), into a single, more general constraint, namely the Subjacency Principle (Chomsky 1973: 271; 1986a: 40; Baltin 1981, 2006). Subjacency was assumed to hold universally. The Subjacency Principle states that movement operations can cross at most one bounding node, whereby what exactly counts as a bounding node depends on the language in question (Baltin 1981: 262; 2006; Rizzi 1982: 57; Chomsky 1986a: 38–40).<sup>11</sup>

Currently, there are varying opinions in the GB/Minimalism tradition with regard to the question of whether subjacency should be considered as part of innate linguistic knowledge. Hauser, Chomsky & Fitch (2002) assume that subjacency does not form part of language-specific abilities, at least not in the strictest sense, but rather is a linguistically relevant constraint in the broader sense that the constraints in question can be derived from more general cognitive ones (see p. 469). Since subjacency still plays a role as a UG principle in other contemporary works (Newmeyer 2005: 15, 74–75; 2004a: 184; Baltin 2006<sup>12</sup>; Baker 2009; Freidin 2009; Rizzi 2009b,a), the Subjacency Principle will be discussed here in some further detail.

It is possible to distinguish two types of movement: movement to the left (normally called extraction) and movement to the right (normally referred to as extraposition). Both movement types constitute long-distance dependencies. In the following section, I will discuss some of the restrictions on extraposition. Extraction will be discussed in Section 13.1.5.2 following it.

#### **13.1.5.1 Extraposition**

Baltin (1981) and Chomsky (1986a: 40) claim that the extraposed relative clauses in (14) have to be interpreted with reference to the embedding NP, that is, the sentences are not

<sup>11</sup>Newmeyer (2004b: 539–540) points out a conceptual problem following from the language-specific determination of bounding nodes: it is argued that subjacency is an innate language-specific principle since it is so abstract that it is impossible for speakers to learn it. However, if parameterization requires that a speaker chooses from a set of categories in the linguistic input, then the corresponding constraints must be derivable from the input at least to the degree that it is possible to determine the categories involved. This raises the question as to whether the original claim of the impossibility of acquisition is actually justified. See Section 13.8 on the *Poverty of the Stimulus* and Section 16.1 on parameter-based theories of language acquisition.

Note also that a parameter whose value is a part of speech requires the respective part-of-speech values to be part of UG.

<sup>12</sup>However, see Baltin (2004: 552).

equivalent to those where the relative clause would occur in the position marked with t, but rather correspond to examples where it occurs in the position of t′.

(14)	a. [NP Many books [PP with [stories t]] t′] were sold [that I wanted to read].
	b. [NP Many proofs [PP of [the theorem t]] t′] appeared [that I wanted to think about].

Here, it is assumed that NP, PP, VP and AP are bounding nodes for rightward movement (at least in English) and the interpretation in question here is thereby ruled out by the Subjacency Principle (Baltin 1981: 262).
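Procedurally, the subjacency check amounts to counting bounding nodes on the path between the base position and the landing site, as in the following minimal sketch (the set of bounding nodes is the one for rightward movement in English just mentioned; the list encoding of paths is invented for illustration):

```python
BOUNDING = {"NP", "PP", "VP", "AP"}  # bounding nodes for rightward movement in English

def is_subjacent(crossed_labels):
    """Movement may cross at most one bounding node."""
    return sum(label in BOUNDING for label in crossed_labels) <= 1

# Moving the relative clause in (14b) from t to t' crosses an NP and a PP:
print(is_subjacent(["NP", "PP"]))  # False: the movement is ruled out
print(is_subjacent(["NP"]))        # True: one bounding node is allowed
```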

If we construct a German example parallel to (14a) and replace the embedding noun so that it is ruled out or dispreferred as a referent, then we arrive at (15):

(15)	weil viele Schallplatten mit Geschichten verkauft wurden, die ich noch lesen wollte
		because many records with stories sold were that I still read wanted
		'because many records with stories were sold that I wanted to read'

This sentence can be uttered in a situation where somebody in a record store sees particular records and remembers that he had wanted to read the fairy tales on those records. Since one does not read records, adjunction to the superordinate noun is implausible and thus adjunction to *Geschichten* 'stories' is preferred. By carefully choosing the nouns, it is possible to construct examples such as (16) that show that extraposition can take place across multiple NP nodes:<sup>13</sup>

(16)	a. Karl hat mir [ein Bild [einer Frau _ ]] gegeben, [die schon lange tot ist].
		Karl has me a picture a woman given that part long dead is
		'Karl gave me a picture of a woman that has been dead for some time.'

	b. Karl hat mir [eine Fälschung [des Bildes [einer Frau _ ]]] gegeben, [die schon lange tot ist].
		Karl has me a forgery of.the picture of.a woman given that part long dead is
		'Karl gave me a forgery of the picture of a woman that has been dead for some time.'

	c. Karl hat mir [eine Kopie [einer Fälschung [des Bildes [einer Frau _ ]]]] gegeben, [die schon lange tot ist].
		Karl has me a copy of.a forgery of.the picture of.a woman given that part long dead is
		'Karl gave me a copy of a forgery of the picture of a woman that has been dead for some time.'

<sup>13</sup>See Müller (1999b: 211) and Müller (2004c; 2007c: Section 3). For parallel examples from Dutch, see Koster (1978: 52).


This kind of embedding could be continued were it not for the fact that one eventually runs out of nouns that allow for semantically plausible embedding. NP is viewed as a bounding node in German (Grewendorf 1988: 81; 2002: 17–18; Haider 2001: 285). These examples show that it is possible for rightward extraposed relative clauses to cross any number of bounding nodes.

Koster (1978: 52–54) discusses some possible explanations for the data in (16), where it is assumed that relative clauses move to the NP/PP border and are then moved on further from there (this movement requires so-called escape hatches or escape routes). He argues that these approaches will also work for the very sentences that should be ruled out by subjacency, that is, for examples such as (14). This means that either data such as (14) can be explained by subjacency and the sentences in (16) are counterexamples, or there are escape hatches and the examples in (14) are irrelevant, deviant sentences that cannot be explained by subjacency.

In the examples in (16), a relative clause was extraposed in each case. These relative clauses are treated as adjuncts and there are analyses that assume that extraposed adjuncts are not moved but rather base-generated in their position, and coreference/ coindexation is achieved by special mechanisms (Kiss 2005). For proponents of these kinds of analyses, the examples in (16) would be irrelevant to the subjacency discussion as the Subjacency Principle only constrains movement. However, extraposition across phrase boundaries is not limited to relative clauses; sentential complements can also be extraposed:

(17)	a. Ich habe [von [der Vermutung _ ]] gehört, [dass es Zahlen gibt, die die folgenden Bedingungen erfüllen].
		I have from the conjecture heard that expl numbers gives that the following requirements fulfill
		'I have heard of the conjecture that there are numbers that fulfill the following requirements.'

	b. Ich habe [von [dem Versuch _ ]] gehört, [die Vermutung zu beweisen, dass es Zahlen gibt, die die folgenden Bedingungen erfüllen].
		'I have heard of the attempt to prove the conjecture that there are numbers that fulfill the following requirements.'

Since there are nouns that select *zu* infinitives or prepositional phrases and since these can be extraposed like the clauses above, it must be ensured that the syntactic category of the postposed element corresponds to the category required by the noun. This means that there has to be some kind of relation between the governing noun and the extraposed element. For this reason, the examples in (17) have to be analyzed as instances of extraposition and provide counterevidence to the claims discussed above.

If one wishes to discuss the possibility of recursive embedding, then one is forced to refer to constructed examples as the likelihood of stumbling across groups of sentences such as those in (16) and (17) is very remote. It is, however, possible to find some individual cases of deep embedding: (18) gives some examples of relative clause extraposition and complement extraposition taken from the Tiger corpus<sup>14</sup> (Müller 2007c: 78–79; Meurers & Müller 2009: Section 2.1).

(18)	a. Der 43jährige will nach eigener Darstellung damit [NP den Weg [PP für [NP eine Diskussion [PP über [NP den künftigen Kurs [NP der stärksten Oppositionsgruppierung]]]]]] freimachen, [die aber mit 10,4 Prozent der Stimmen bei der Wahl im Oktober weit hinter den Erwartungen zurückgeblieben war]. (s27639)
		the 43.year.old wants after own depiction there.with the way for a discussion about the future course of.the strongest opposition.group free.make that however with 10.4 percent of.the votes at the election in October far behind the expectations stayed.back was
		'In his own words, the 43-year-old wanted to clear the way for a discussion about the future course of the strongest opposition group that had, however, performed well below expectations, gaining only 10.4 percent of the votes at the election in October.'

	b. […] die Erfindung der Guillotine könnte [NP die Folge [NP eines verzweifelten Versuches des gleichnamigen Doktors]] gewesen sein, [seine Patienten ein für allemal von Kopfschmerzen infolge schlechter Kissen zu befreien]. (s16977)
		the invention of.the guillotine could the result of.a desperate attempt the same.name doctor have been his patients once for all.time of headaches because.of bad pillows to free
		'The invention of the guillotine could have been the result of a desperate attempt of the eponymous doctor to rid his patients once and for all of headaches from bad pillows.'

It is also possible to construct sentences for English that violate the Subjacency Condition. Uszkoreit (1990: 2333) provides the following example:

(19) [NP Only letters [PP from [NP those people _ ]]] remained unanswered [that had received our earlier reply].

<sup>14</sup>See Brants et al. (2004) for more information on the Tiger corpus.

Jan Strunk (p. c. 2008) has found examples for extraposition of both restrictive and nonrestrictive relative clauses across multiple phrase boundaries:

	- b. I picked up [NP a copy of [NP a book _ ]] today, by a law professor, about law, [that is not assigned or in any way required to read].<sup>16</sup>
	- c. We drafted [NP a list of [NP basic demands _ ]] that night [that had to be unconditionally met or we would stop making and delivering pizza and go on strike].<sup>17</sup>

(20a) is also published in Strunk & Snider (2013: 111). Further attested examples from German and English can be found in this paper.

The preceding discussion has shown that subjacency constraints on rightward movement do not hold for English or German and thus cannot be viewed as universal. One could simply claim that NP and PP are not bounding nodes in English or German. Then, these extraposition data would no longer be problematic for theories assuming subjacency. However, subjacency constraints are also assumed for leftward movement. This is discussed in more detail in the following section.

### **13.1.5.2 Extraction**

Under certain conditions, leftward movement is not possible from certain constituents (Ross 1967). These constituents are referred to as islands for extraction. Ross (1967: Section 4.1) formulated the *Complex NP Constraint* (CNPC) that states that extraction is not possible from complex noun phrases. An example of extraction from a relative clause inside a noun phrase is the following:

(21) \* Who did he just read [NP the report [S that was about _ ]]?

Although (21) would be a semantically plausible question, the sentence is still ungrammatical. This is explained by the fact that the question pronoun has been extracted across the sentence boundary of a relative clause and then across the NP boundary and has therefore crossed two bounding nodes. It is assumed that the CNPC holds for all languages. This is not the case, however, as the corresponding structures are possible in Danish (Erteschik-Shir & Lappin 1979: 55), Norwegian, Swedish, Japanese, Korean, Tamil and Akan (see Hawkins (1999: 245, 262) and references therein). Since the restrictions of the CNPC are integrated into the Subjacency Principle, it follows that the Subjacency Principle cannot be universally applicable unless one claims that NP is not a bounding node in the problematic languages. However, it seems indeed to be the case that the majority of languages do not allow extraction from complex noun phrases. Hawkins

<sup>15</sup>http://www.publications.parliament.uk/pa/cm199899/cmselect/cmenvtra/32ii/32115.htm, 2018-02-20.

<sup>16</sup>http://greyhame.org/archives/date/2005/09/, 2008-09-27.

<sup>17</sup>http://portland.indymedia.org/en/2005/07/321809.shtml, 2018-02-20.

explains this on the basis of the processing difficulties associated with the structures in question (Section 4.1). He explains the difference between languages that allow this kind of extraction and languages that do not with reference to the differing processing load for structures that stem from the interaction of extraction with other grammatical properties such as verb position and other conventionalized grammatical structures in the respective languages (Section 4.2).

Unlike extraction from complex noun phrases, extraction across a single sentence boundary (22) is not ruled out by the Subjacency Principle.

(22) Who did she think that he saw _?

Movement across multiple sentence boundaries, as discussed in previous chapters, is explained by so-called cyclic movement in transformational theories: a question pronoun is moved to a specifier position and can then be moved further to the next highest specifier. Each of these movement steps is subject to the Subjacency Principle; what the Subjacency Principle rules out is long-distance movement in one fell swoop.

The Subjacency Principle cannot explain why extraction from sentences embedded under verbs that specify the kind of utterance (23a) or factive verbs (23b) is deviant (Erteschik-Shir & Lappin 1979: 68–69).

(23)	a. ?? Who did she mumble that he saw _?
	b. ?? Who did she realize that he saw _?

The structure of these sentences seems to be the same as (22). There were also attempts in entirely syntactic approaches to explain these differences as subjacency violations or as violations of Ross's constraints. It has therefore been assumed (Stowell 1981: 401–402) that the sentences in (23) have a structure different from those in (22). Stowell treats the sentential arguments of manner-of-speaking verbs as adjuncts. Since adjunct clauses are islands for extraction by assumption, this would explain why (23a) is marked. The adjunct analysis is compatible with the fact that these sentential arguments can be omitted:

(24)	b. She shouted.

Ambridge & Goldberg (2008: 352) have pointed out that treating such clauses as adjuncts is not justified, as they are only possible with a very restricted class of verbs, namely verbs of saying and thinking. Such selectivity is a property of arguments, not of adjuncts. Adjuncts such as place modifiers are possible with a wide range of verb classes. Furthermore, the meaning changes if the sentential argument is omitted as in (24b): whereas (24a) requires that some information is communicated, this does not have to be the case with (24b). It is also possible to replace the sentential argument with an NP as in (25), which one would certainly not want to treat as an adjunct.

(25) She shouted the remark/the question/something I could not understand.

The possibility of classifying sentential arguments as adjuncts cannot be extended to factive verbs as their sentential argument is not optional (Ambridge & Goldberg 2008: 352):

(26)	a. She realized that he left.
	b. ?? She realized.

Kiparsky & Kiparsky (1970) suggest an analysis of factive verbs that assumes a complex noun phrase with a nominal head. An optional *fact* Deletion-Transformation removes the head noun and the determiner of the NP in sentences such as (27a) to derive sentences such as (27b) (page 159).

(27)	a. She realized [NP the fact [S that he left]].
	b. She realized [NP [S that he left]].

The impossibility of extraction out of such sentences can be explained by assuming that two bounding nodes were crossed, which was assumed to be impossible (on the island status of this construction, see Kiparsky & Kiparsky 1970: Section 4). This analysis predicts that extraction from complement clauses of factive verbs should be just as bad as extraction from overt NP arguments since the structure for both is the same. According to Ambridge & Goldberg (2008: 353), this is, however, not the case:

(28)	a. * Who did she realize the fact that he saw _?
	b. ?? Who did she realize that he saw _?

Together with Erteschik-Shir (1981), Erteschik-Shir & Lappin (1979), Takami (1988) and Van Valin (1998), Goldberg (2006: Section 7.2) assumes that the gap must be in a part of the utterance that can potentially form the focus of an utterance (see Cook (2001), De Kuthy (2002) and Fanselow (2003c) for German). This means that this part must not be presupposed.<sup>18</sup> If one considers what this means for the data from the subjacency discussion, then one notices that in each case extraction has taken place out of presupposed material:

(29)	a. Complex NP
		She didn't see the report that was about him. → The report was about him.

<sup>18</sup>Information is presupposed if it is true regardless of whether the utterance is negated or not. Thus, it follows from both (i.a) and (i.b) that there is a king of France.

(i)	a. The King of France is bald.
	b. The King of France is not bald.

Goldberg assumes that constituents that belong to backgrounded information are islands (*Backgrounded constructions are islands* (BCI)). Ambridge & Goldberg (2008) have tested this semantic/pragmatic analysis experimentally and compared it to a purely syntactic approach. They were able to confirm that information structural properties play a significant role for the extractability of elements. Along with Erteschik-Shir (1973: Section 3.H), Ambridge & Goldberg (2008: 375) assume that languages differ with regard to the extent to which constituents have to belong to backgrounded information in order to rule out extraction. In any case, extraction from adjuncts should not be ruled out for all languages, as there are languages such as Danish where it is possible to extract from relative clauses.<sup>19</sup> Erteschik-Shir (1973: 61) provides the following examples, among others:

(30)	b. Det hus kender jeg en mand [som har købt _ ].
		that house know I a man that has bought
		'I know a man that has bought that house.' (lit.: 'This house, I know a man that has bought.')

And as the following example from McCawley (1981: 108) shows, extraction out of relative clauses is possible even in English:

(31) Then you look at what happens in languages you know and languages that you have a friend [who knows _ ]. (Charles Ferguson, lecture at the University of Chicago, 1971)

Rizzi's parameterization of the subjacency restriction has been abandoned in many works, and the relevant effects have been ascribed to differences in other areas of grammar (Adams 1984, Chung & McCloskey 1983, Grimshaw 1986, Kluender 1992).

We have seen in this subsection that there are reasons other than syntactic properties of structure as to why leftward movement might be blocked. In addition to information structural properties, processing considerations also play a role (Grosu 1973, Ellefson & Christiansen 2000, Gibson 1998, Kluender & Kutas 1993, Hawkins 1999, Sag, Hofmeister & Snider 2007). The length of constituents involved, the distance between filler and gap, definiteness, complexity of syntactic structure and interference effects between similar discourse referents in the space between the filler and gap are all important factors for the acceptability of utterances. Since languages differ with regard to their syntactic structure, varying effects of performance, such as the ones found for extraposition and extraction, are to be expected.

<sup>19</sup>Discussing the question of whether UG-based approaches are falsifiable, Crain, Khlentzos & Thornton (2010: 2669) claim that it is not possible to extract from relative clauses and the existence of such languages would call into question the very concept of UG. ("If a child acquiring any language could learn to extract linguistic expressions from a relative clause, then this would seriously cast doubt on one of the basic tenets of UG.") They thereby contradict Evans and Levinson as well as Tomasello, who claim that UG approaches are not falsifiable. If the argumentation of Crain, Khlentzos and Thornton were correct, then (30) and (31) would falsify UG and that would be the end of the discussion.

In sum, we can say that subjacency constraints do not hold for extraposition in either German or English and furthermore that one can better explain constraints on extraction with reference to information structure and processing phenomena than with the Subjacency Principle. Assuming subjacency as a syntactic constraint in a universal competence grammar is therefore unnecessary to explain the facts.

### **13.1.6 Grammatical morphemes for tense, mood and aspect**

Pinker (1994: 238) is correct in claiming that there are morphemes for tense, mood, aspect, case and negation in many languages. However, there is a great deal of variation with regard to which of these grammatical properties a language has and how they are expressed.

For examples of differences in tense systems, see Dahl & Velupillai (2013a,b). Mandarin Chinese is a clear case: it has next to no morphology. The fact that the same kinds of morphemes occur in one form or another in almost every language can be attributed to the fact that certain meanings need to be expressed again and again, and what is constantly repeated becomes grammaticalized.

### **13.1.7 Parts of speech**

In Section 4.6, so-called cartographic approaches were mentioned, some of which assume over thirty functional categories (see Table 4.1 on page 147 for Cinque's functional heads) and assume that these categories form part of UG together with corresponding fixed syntactic structures. Cinque & Rizzi (2010: 55, 57) even assume over 400 functional categories that are claimed to play a role in the grammars of all languages.<sup>20</sup> Also, specific parts of speech such as Infl (inflection) and Comp (complementizer) are referred to when formulating principles that are assumed to be universal (Baltin 1981: 262; 2006; Rizzi 1982; Chomsky 1986a: 38; Hornstein 2013: 397).

Chomsky (1988: 68; 1991; 1995b: 131), Pinker (1994: 284, 286), Briscoe (2000: 270) and Wunderlich (2004: 621) make comparatively fewer assumptions about the innate inventory of parts of speech: Chomsky assumes that all lexical categories (verbs, nouns, adjectives and adpositions) belong to UG and that languages have these at their disposal. Pinker, Briscoe and Wunderlich assume that all languages have nouns and verbs. Again, critics of UG have raised the question of whether these syntactic categories can be found in other languages in the form known to us from languages such as German and English.

Braine (1987: 72) argues that parts of speech such as verb and noun should be viewed as derived from fundamental concepts like argument and predicate (see also Wunderlich (2008: 257)). This means that there is an independent explanation for the presence of these categories that is not based on innate language-specific knowledge.

Evans & Levinson (2009a: Section 2.2.4) discuss the typological literature and give examples of languages which lack adverbs and adjectives. The authors cite Straits Salish as a language in which there may be no difference between verbs and nouns (see also Evans & Levinson 2009b: 481). They remark that it does make sense to assume the additional

<sup>20</sup>The question of whether these categories form part of UG is left open.

word classes ideophone, positional, coverb and classifier for the analysis of non-Indo-European languages on top of the four or five normally used.<sup>21</sup> This situation is not a problem for UG-based theories if one assumes that languages can choose from an inventory of possibilities (a toolkit) but do not have to exhaust it (Jackendoff 2002: 263; Newmeyer 2005: 11; Fitch, Hauser & Chomsky 2005: 204; Chomsky 2007: 6–7; Cinque & Rizzi 2010: 55, 58, 65). However, if we adopt this view, then there is a certain arbitrariness: it is possible to assume any parts of speech that one requires for the analysis of at least one language, attribute them to UG and then claim that most (or maybe even all) languages do not make use of the entire set. This is what is suggested by Villavicencio (2002: 157), working in the framework of Categorial Grammar, for the categories S, NP, N, PP and PRT. This kind of assumption is not falsifiable (see van Riemsdijk 1978: 148; Evans & Levinson 2009a: 436; Tomasello 2009: 471 for a discussion of similar cases and a more general discussion).

Whereas Evans and Levinson assume that one needs additional categories, Haspelmath (2009: 458) and Croft (2009: 453) go so far as to deny the existence of cross-linguistic parts of speech. I consider this to be too extreme and believe that a better research strategy is to try and find commonalities between languages.<sup>22</sup> One should, however, expect to find languages that do not fit into our Indo-European-biased conceptions of grammar.

### **13.1.8 Recursion and infinitude**

In an article in *Science*, Hauser, Chomsky & Fitch (2002) put forward the hypothesis that the only domain-specific universal is recursion, "providing the capacity to generate an infinite range of expressions from a finite set of elements" (see (37b) on page 64 for an example of a recursive phrase structure rule).<sup>23</sup> This assumption is controversial and there have been both formal and empirical objections to it.

#### **13.1.8.1 Formal problems**

The claim that our linguistic capabilities are infinite is widespread and can already be found in Humboldt's work:<sup>24</sup>

<sup>21</sup>For the opposite view, see Pinker & Jackendoff (2009: 465).

<sup>22</sup>Compare Chomsky (1999: 2): "In the absence of compelling evidence to the contrary, assume languages to be uniform, with variety restricted to easily detectable properties of utterances."

<sup>23</sup>In a discussion article in *Cognition*, Fitch, Hauser & Chomsky (2005) clarify that their claim that recursion is the only language-specific and human-specific property is a hypothesis, and it could be the case that there are no language-specific/species-specific properties at all. In that case, a particular combination of abilities and properties would be specific to humans (pp. 182–201). An alternative they consider is that innate language-specific knowledge has a complexity corresponding to what was assumed in earlier versions of Mainstream Generative Grammar (p. 182). Chomsky (2007: 7) notes that Merge could be a non-language-specific operation but still attributes it to UG.

Das Verfahren der Sprache ist aber nicht bloß ein solches, wodurch eine einzelne Erscheinung zustande kommt; es muss derselben zugleich die Möglichkeit eröffnen, eine unbestimmbare Menge solcher Erscheinungen und unter allen, ihr von dem Gedanken gestellten Bedingungen hervorzubringen. Denn sie steht ganz eigentlich einem unendlichen und wahrhaft grenzenlosen Gebiete, dem Inbegriff alles Denkbaren gegenüber. Sie muss daher von endlichen Mitteln einen unendlichen Gebrauch machen, und vermag dies durch die Identität der gedanken- und spracheerzeugenden Kraft. (Humboldt 1988: 108)

<sup>24</sup>The process of language is not simply one where an individual instantiation is created; at the same time it must allow for an indefinite set of such instantiations and must above all allow the expression of the conditions imposed by thought. Language faces an infinite and truly unbounded subject matter, the epitome of everything one can think of. Therefore, it must make infinite use of finite means and this is possible through the identity of the power that is responsible for the production of thought and language.

If we just look at the data, we can see that there is an upper bound for the length of utterances. This has to do with the fact that extremely long utterances cannot be processed and that speakers have to sleep or will eventually die at some point. If we set a generous maximal sentence length of 100,000 morphemes and assume a morpheme inventory of size X, then one can form fewer than X<sup>100,000</sup> utterances. We arrive at the number X<sup>100,000</sup> if we use each of the morphemes at each of the 100,000 positions. Since not all of these sequences will be well-formed, there are actually fewer than X<sup>100,000</sup> possible utterances (see also Weydt 1972 for a similar but more elaborate argument). This number is incredibly large, but still finite. The same is true of thought: we do not have infinitely many possible thoughts (if *infinitely* is used in the mathematical sense of the word), despite claims by Humboldt and Chomsky (2008: 137) to the contrary.<sup>25</sup>
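The counting can be made explicit as a worked bound. Under the deliberately generous simplification that every morpheme sequence up to the maximal length L counts as an utterance (an overestimate, since most sequences are ill-formed), the number of sequences over an inventory of X ≥ 2 morphemes is

```latex
\sum_{i=1}^{L} X^{i} = \frac{X^{L+1} - X}{X - 1} < X^{L+1}
```

which for L = 100,000 is astronomically large but still finite.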

In the literature, one sometimes finds the claim that it is possible to produce infinitely long sentences (see for instance Nowak, Komarova & Niyogi (2001: 117), Kim & Sells (2008: 3) and Dan Everett in O'Neill & Wood (2012) at 25:19). This is most certainly not the case. It is also not the case that the rewrite grammars we encountered in Chapter 2 allow for the creation of infinite sentences, as the set of symbols on the right-hand side of a rule has to be finite by definition. While it is possible to derive an infinite number of sentences, the sentences themselves cannot be infinite, since it is always one symbol that is replaced by finitely many other symbols and hence no infinite symbol sequence can result.

<sup>25</sup>Weydt (1972) discusses Chomsky's statements regarding the existence of infinitely many sentences and whether it is legitimate for Chomsky to refer to Humboldt. Chomsky's quote in *Current Issues in Linguistic Theory* (Chomsky 1964a: 17) leaves out the sentence *Denn sie steht ganz eigentlich einem unendlichen und wahrhaft grenzenlosen Gebiete, dem Inbegriff alles Denkbaren gegenüber.* Weydt (1972: 266) argues that Humboldt, Bühler and Martinet claimed that there are infinitely many thoughts that can be expressed. Weydt claims that it does not follow that sentences may be arbitrarily long. Instead he suggests that there is no upper bound on the length of texts. This claim is interesting, but I guess texts are just the next bigger unit and the argument that Weydt put forward against languages without an upper bound for sentence length also applies to texts. A text can be generated by the rather simplified rule in (i) that combines an utterance U with a text T, resulting in a larger text T:

(i) T → T U

U can be a sentence or another phrase that can be part of a text. If one is ready to admit that there is no upper bound on the length of texts, it follows that there cannot be an upper bound on the length of sentences either, since one can construct long sentences by joining all phrases of a text with *and*. Such long sentences that are the product of conjoining short sentences are different in nature from very long sentences that are admitted under the Chomskyan view in that they do not include center-self-embeddings of arbitrary depth (see Section 15), but nevertheless the number of sentences that can be produced from arbitrarily long texts is infinite.

As for arbitrarily long texts there is an interesting problem: let us assume that a person produces sentences and keeps adding them to an existing text. This enterprise will be interrupted when the human being dies. One could say that another person could take up the text extension until this one dies, and so on. Again the question is whether one can understand the meaning and the structure of a text that is several million pages long. 42. If this is not enough of a problem, one may ask oneself whether the language of the person who keeps adding to the text in the year 2731 is still the same that the person who started the text spoke in 2015. If the answer to this question is no, then the text is not a document containing sentences from one language L but a mix from several languages and hence irrelevant for the debate.

Chomsky (1965: Section I.1) follows de Saussure (1916) and draws a distinction between competence and performance: competence is the knowledge about what kind of linguistic structures are well-formed, and performance is the application of this knowledge (see Section 12.6.3 and Chapter 15). Our restricted brain capacity as well as other constraints are responsible for the fact that we cannot deal with an arbitrary amount of embedding and that we cannot produce utterances longer than 100,000 morphemes. The separation between competence and performance makes sense and allows us to formulate rules for the analysis of sentences such as (32):

(32)	a. Richard is sleeping.
	b. Karl suspects that Richard is sleeping.
	c. Otto claims that Karl suspects that Richard is sleeping.
	d. Julius believes that Otto claims that Karl suspects that Richard is sleeping.
	e. Max knows that Julius believes that Otto claims that Karl suspects that Richard is sleeping.

The rule takes the following form: combine a noun phrase with a verb of a certain class and a clause. By applying this rule successively, it is possible to form strings of arbitrary length. Pullum & Scholz (2010) point out that one has to keep two things apart: the question of whether language is a recursive system and the question of whether the best models that we can devise for a particular language simply happen to be recursive. For more on this point and on processing in the brain, see Luuk & Luuk (2011). The fact that strings of words can be constructed in this way does not show that (a particular) language is infinite, even if this is often claimed to be the case (Bierwisch 1966: 105–106; Pinker 1994: 86; Hauser, Chomsky & Fitch 2002: 1571; Müller 2007a: 1; Hornstein, Nunes & Grohmann 2005: 7; Kim & Sells 2008: 3).
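The successive application of this rule is easy to simulate; the following minimal sketch reuses the words of (32), while the function itself is invented for illustration.

```python
SUBJECTS = ["Karl", "Otto", "Julius", "Max"]
VERBS = ["suspects", "claims", "believes", "knows"]

def embed(depth):
    """Embed (32a) under `depth` attitude verbs, as in (32b-e)."""
    sentence = "Richard is sleeping"
    for i in range(depth):
        sentence = f"{SUBJECTS[i % 4]} {VERBS[i % 4]} that {sentence}"
    return sentence

print(embed(2))                  # Otto claims that Karl suspects that Richard is sleeping
print(len(embed(1000).split()))  # 3003 words: ever longer, but every string is finite
```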

The "proof" of this infinitude of language is led as an indirect proof parallel to the proof that shows that there is no largest natural number (Bierwisch 1966: 105–106; Pinker 1994: 86). In the domain of natural numbers, this works as follows: assume is the largest natural number. Then form + 1 and, since this is by definition a natural number, we have now found a natural number that is greater than . We have therefore shown that the assumption that is the highest number leads to a contradiction and thus that there cannot be such a thing as the largest natural number.

When transferring this proof into the domain of natural language, the question arises as to whether one would still want to class a string of 1,000,000,000 words as part of the language we want to describe. If we do not want this, then this proof will not work.

If we view language as a biological construct, then one has to accept the fact that it is finite. Otherwise, one is forced to assume that it is infinite, but that an infinitely large part of the biologically real object is not biologically real (Postal 2009: 111). Luuk & Luuk (2011) refer to languages as physically uncountable but finite sets of strings. They point out that a distinction must be made between the ability to imagine extending a sentence indefinitely and the ability to take a sentence from a non-countable set of strings and really extend it. We possess the first ability but not the second.

One possibility to provide arguments for the infinitude of languages is to claim that only generative grammars, which create sets of well-formed utterances, are suited to modeling language and that we need recursive rules to capture the data, which is why mental representations must involve a recursive procedure that generates infinitely many expressions (Chomsky 1956: 115; 2002: 86–87), which then implies that languages consist of infinitely many expressions. There are two mistakes in this argument, which have been pointed out by Pullum & Scholz (2010): even if one assumes generative grammars, it can still be the case that a grammar generates only a finite set despite containing recursive rules. Pullum & Scholz (2010: 120–121) give an interesting example of such a context-sensitive grammar from András Kornai.
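Their first point can be illustrated with a toy grammar (a minimal example of my own, much simpler than Kornai's): the rule set below is recursive, yet the generated language is finite.

```python
# A grammar with recursive rules whose language is nevertheless
# finite: S -> A | a and A -> S. S and A derive each other, but the
# only terminal string ever produced is 'a'.

rules = {
    "S": [["A"], ["a"]],
    "A": [["S"]],
}

def generate(symbol: str, depth: int) -> set[str]:
    """All terminal strings derivable from symbol within a depth bound."""
    if symbol not in rules:              # terminal symbol
        return {symbol}
    if depth == 0:
        return set()
    results: set[str] = set()
    for rhs in rules[symbol]:
        combos = {""}
        for sym in rhs:
            combos = {c + s for c in combos
                      for s in generate(sym, depth - 1)}
        results |= combos
    return results

# Raising the depth bound never adds strings: the language is {'a'}.
for d in (2, 5, 20):
    print(d, generate("S", d))
```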

The more important mistake is that it is not necessary to assume that grammars generate sets. There are three explicitly formalized alternatives, of which only the third is mentioned here, namely the model-theoretic and therefore constraint-based approaches (see Chapter 14). Johnson & Postal's Arc Pair Grammar (1980), LFG in the formalization of Kaplan (1989), GPSG in the reformalization of Rogers (1997) and HPSG with the assumptions of King (1999), Pollard (1999) and Richter (2007) are examples of model-theoretic approaches. In constraint-based theories, one would analyze an example like (32) by saying that certain attitude verbs select a nominative NP and a *that* clause and that these can only occur in a certain local configuration where a particular relation holds between the elements involved. One of these relations is subject-verb agreement. In this way, one can represent expressions such as (32) without having to say anything about how many sentences can be embedded. This means that constraint-based theories are compatible with both answers to the question of whether there is a finite or an infinite number of structures. Using competence grammars formulated in the relevant way, it is possible to develop performance models that explain why certain strings – for instance very long ones – are unacceptable (see Chapter 15).

#### **13.1.8.2 Empirical problems**

It is sometimes claimed that all natural languages are recursive and that sentences of an arbitrary length are possible in all languages (Hornstein, Nunes & Grohmann 2005: 7 for an overview, and see Pullum & Scholz (2010: Section 2) for further references). When one speaks of recursion, what is often meant are structures with self-embedding as we saw in the analysis of (32) (Fitch 2010). However, it is possible that there are languages that do not allow self-embedding. Everett (2005) claims that Pirahã is such a language (however, see Nevins, Pesetsky & Rodrigues (2009) and Everett (2009, 2012); a more recent corpus study of Pirahã can be found in Futrell, Stearns, Everett, Piantadosi & Gibson (2016)). A further example of a language without recursion, which is sometimes cited with reference to Hale (1976), is Warlpiri. Hale's rules for the combination of a sentence with a relative clause are recursive, however (page 85). This recursion is made explicit on page 98.<sup>26</sup> Pullum & Scholz (2010: 131) discuss Hixkaryána, an Amazonian language from the Carib language family that is not related to Pirahã. This language does have embedding, but the embedded material has a different form from that of the matrix clause. It could be the case that these embeddings cannot be carried out indefinitely. In Hixkaryána, there is also no possibility to coordinate phrases or clauses (Derbyshire (1979: 45), cited by Pullum & Scholz (2010: 131)), which is why this way of forming recursive sentence embeddings does not exist in this language either. Other languages without self-embedding seem to be Akkadian, Dyirbal and Proto-Uralic.

There is of course a trivial sense in which all languages are recursive: they follow a rule that says that a particular number of symbols can be combined to form another symbol.<sup>27</sup>

(33) X → X … X

In this sense, all natural languages are recursive and the combination of simple symbols to more complex ones is a basic property of language (Hockett 1960: 6). The fact that the debate about Pirahã is so fierce could go to show that this is not the kind of recursion that is meant. Also, see Fitch (2010).

It is also assumed that the combinatorial rules of Categorial Grammar hold universally. It is possible to use these rules to combine a functor with its arguments (X/Y ∗ Y = X). These rules are almost as abstract as the rules in (33). The difference is that one of the elements has to be the functor. There are also corresponding constraints in the Minimalist Program such as selectional features (see Section 4.6.4) and restrictions on the assignment of semantic roles. However, whether or not a Categorial Grammar licenses recursive structures does not depend on the very general combinatorial schemata, but rather on the lexical entries. Using the lexical entries in (34), it is only possible to analyze the four sentences in (35) and certainly not to build recursive structures. (See Chomsky (2014) for a similar discussion of a hypothetical "truncated English".)

(34) a. the: np/n
	- b. cat: n
	- c. woman: n
	- d. sees: (s\np)/np


<sup>26</sup>However, he does note on page 78 that relative clauses are separated from the sentence containing the head noun by a pause. Relative clauses in Warlpiri are always peripheral, that is, they occur to the left or right of a sentence with the noun they refer to. Similar constructions can be found in German:

(i) Es war einmal ein Mann. Der hatte sieben Söhne.
    there was once a man he had seven sons
    'There once was a man. He had seven sons.'

It could be the case that we are dealing with linking of sentences at text level and not recursion at sentence level.

<sup>27</sup>Chomsky (2005: 11) assumes that Merge combines n objects. A special instance of this is binary Merge.

(35) a. The woman sees the cat.
	- b. The cat sees the woman.
	- c. The woman sees the woman.
	- d. The cat sees the cat.

If we expand the lexicon to include modifiers of the category n/n or conjunctions of the category (X\X)/X, then we arrive at a recursive grammar. For example, if we add the two lexical items in (36), the grammar licenses sentences like (37):

(36) a. ugly: n/n
	- b. fat: n/n

(37) a. The woman sees the fat cat.
	- b. The woman sees the ugly fat cat.

The grammar allows for the combination of arbitrarily many instances of *fat* (or any other adjectives in the lexicon) with nouns, since the result of combining an n/n with an n is an n. There is no upper limit on such combinations.

Concerning Pirahã, Everett (2012: 560) stated: "The upper limit of a Pirahã sentence is a lexical frame with modifiers—the verb, its arguments, and one modifier for each of these. And up to two (one at each edge of the sentence) additional sentence-level or verb-level prepositional adjuncts" and "there can at most be one modifier per word. You cannot say in Pirahã 'many big dirty Brazil-nuts'. You would need to say 'There are big Brazil-nuts. There are many. They are dirty'." The restriction to just one modifier per noun can easily be incorporated into the lexical items for nominal modifiers. One has to assume a feature that distinguishes words from non-words. The result of combining two linguistic objects would be word− and words would be word+. The lexical item for modifiers like *big* would be as in (38):

(38) *big* with Pirahã-like restriction on word modification: n/n[word+]

The combination of *big* and *Brazil-nuts* is n[word−]. Since this is incompatible with n[word+] and since all noun modifiers require an n[word+], no further modification is possible. So although the combination rules of Categorial Grammar would allow the creation of structures of unbounded complexity, the Pirahã lexicon rules out the respective combinations. Under such a view, the issue could be regarded as settled: all languages are assumed to combine linguistic objects. The combinations are licensed by the combinatorial rules of Categorial Grammar, by abstract rules like those in HPSG or by their respective equivalents in Minimalism (Merge). Since these rules can apply to their own output, they are recursive.
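How such a lexical restriction does its work can be sketched computationally. The encoding below is mine and is only one of several conceivable ways to state the analysis in (38): forward application (X/Y ∗ Y = X) checks the word feature of the argument, and every combination yields a word− result, so a second modifier is blocked.

```python
# Forward application in Categorial Grammar with the word+/word-
# feature from (38). The Python encoding is an illustration, not a
# fragment of any published grammar of Pirahã.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Cat:
    name: str                    # e.g., "n"
    arg: Optional["Cat"] = None  # a functor X/Y stores Y here
    word: bool = True            # word+ (lexical) vs. word- (derived)

def apply_forward(functor: Cat, argument: Cat) -> Optional[Cat]:
    """X/Y * Y = X; combination results are always word-."""
    if functor.arg is None:
        return None                                # not a functor
    if functor.arg.name != argument.name:
        return None                                # category mismatch
    if functor.arg.word and not argument.word:
        return None                                # (38): needs word+
    return Cat(functor.name, None, word=False)

brazil_nuts = Cat("n")               # a word: n[word+]
big = Cat("n", Cat("n"))             # (38): n/n[word+]

once = apply_forward(big, brazil_nuts)   # 'big Brazil-nuts': n[word-]
twice = apply_forward(big, once)         # blocked: argument is word-
print(once)     # Cat(name='n', arg=None, word=False)
print(twice)    # None -> no second modifier, as in Pirahã
```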

Concluding this subsection, it can be said that the existence of languages like Pirahã is not problematic for the assumption that all languages use rules to combine functors with arguments. However, such languages are problematic for claims stating that all languages allow for the creation of sentences with unbounded length and that recursive structures (NPs containing NPs, Ss containing Ss, …) can be found in all languages.

Fitch, Hauser & Chomsky (2005: 203) note that the existence of languages that do not license recursive structures is not a problem for UG-based theories as not all the possibilities in UG have to be utilized by an individual language. Similar claims were made with respect to part of speech and other morpho-syntactic properties. It was argued that UG is a toolbox and languages can choose which building blocks they use. As Evans & Levinson (2009a: 436, 443) and Tomasello (2009: 471) noticed, the toolbox approach is problematic as one can posit any number of properties belonging to UG and then decide on a language by language basis whether they play a role or not. An extreme variant of this approach would be that grammars of all languages become part of UG (perhaps with different symbols such as NPSpanish, NPGerman). This variant of a UG-based theory of the human capacity for language would be truly unfalsifiable

In the first edition of this book (Müller 2016), I followed the view of Evans & Levinson and Tomasello, but I want to revise my view here. The criticism applies to things like parts of speech (Section 13.1.7), since it is true that the claim that 400 and more parts of speech are part of our genetic endowment (Cinque & Rizzi 2010: 55, 57) cannot really be falsified, but the situation is different here. All grammar formalisms that were covered in this book are capable of analyzing recursive structures (see Section 18.3). If Pirahã lacks recursive structures, one could use a grammar formalism with lower generative capacity (see Chapter 17 on generative capacity) to model the grammar of Pirahã. Parts of the development of theoretical linguistics were driven by the desire to find formalisms of the right computational complexity for describing human languages. GPSG was (with certain assumptions) equivalent to context-free grammars. When Shieber (1985) and Culy (1985) discovered data from Swiss German and Bambara, it was clear that context-free grammars are not sufficiently powerful to describe all known languages. Hence, GPSG was not powerful enough and researchers moved on to more powerful formalisms like HPSG. Other frameworks like CCG, TAG, and Minimalist Grammars were shown to be so-called *mildly context-sensitive* formalisms, providing the additional power needed for Swiss German and Bambara. Now, we are working on languages like English using grammar formalisms with mildly context-sensitive power even though English may be context-free (Pullum & Rawlins 2007). This is similar to the situation with Pirahã: even though English does not have cross-serial dependencies like Swiss German, linguists use tools that could license them. Even though Pirahã does not have recursive structures, linguists use tools that could license them. In general, there are two possibilities: one can have very general combinatory rules and rich specifications in the lexicon, or one can have very specific combinatory rules and less information in the lexicon.<sup>28</sup> If one assumes that the basic combinatory rules are abstract (Minimalism, HPSG, TAG, CG), the difference between Pirahã and English is represented in the lexicon only. Pirahã uses the combinatory potential differently. In this sense, Chomsky (2014) is right in saying that the existence of Pirahã is irrelevant to the discussion of what languages can do.

<sup>28</sup>Personally, I argue for a middle way: Structures like Jackendoff's N-P-N constructions (2008) are analyzed by concrete phrasal constructions. Verb-argument combinations of the kind discussed in the first part of this book are analyzed with abstract combinatory schemata. See Müller (2013c) and Chapter 21.


Chomsky also notes that Pirahã people are able to learn other languages that have recursive structures. So in principle, they can understand and produce more complex structures in much the same way as children of English parents are able to learn Swiss German.

So the claim that all languages are infinite and make use of self-embedding recursive structures is probably falsified by languages like Pirahã, but using recursive rules for the description of all languages is probably a good decision. But even if we assume that recursive rules play a role in the analysis of all natural languages, would this mean that the respective rules and capacities are part of our genetic, language-specific endowment? This question is dealt with in the next subsection.

#### **13.1.8.3 Recursion in other areas of cognition**

There are also phenomena in domains outside of language that can be described with recursive rules: Hauser, Chomsky & Fitch (2002: 1571) mention navigation, family relations and counting systems.<sup>29</sup> One could perhaps argue that the relevant abilities are acquired late and that higher mathematics is a matter of individual accomplishments that do not have anything to do with the cognitive capacities of the majority, but even children at the age of 3 years and 9 months are already able to produce recursive structures: In 2008, there were newspaper reports about an indigenous Brazilian tribe that was photographed from a plane. I showed this picture to my son and told him that Native Americans shot at the plane with a bow and arrow. He then asked me what kind of plane it was. I told him that you cannot see that because the people who took the photograph were sitting in the plane. He then answered that you would then need another plane if you wanted to take a photo that contained both the plane and the Native Americans. He was pleased with his idea and said "And then another one. And then another one. One after the other". He was therefore very much able to imagine the consequence of embeddings.

Culicover & Jackendoff (2005: 113–114) discuss visual perception and music as recursive systems that are independent of language. Jackendoff (2011) extends the discussion of visual perception and music and adds the domains of planning (with the example of making coffee) and wordless comic strips. Chomsky (2007: 7–8) claims that examples from visual perception are irrelevant but then admits that the ability to build up recursive structures could belong to general cognitive abilities (p. 8). He still attributes this ability to UG. He views UG as a subset of the Faculty of Language, that is, as a subset of non domain-specific abilities (Faculty of Language in the Broad Sense = FLB) and the

<sup>29</sup>Pinker & Jackendoff (2005: 230) note, however, that navigation differs from the kind of recursive system described by Chomsky and that recursion is not part of counting systems in all cultures. They assume that those cultures that have developed infinite counting systems could do this because of their linguistic capabilities. This is also assumed by Fitch, Hauser & Chomsky (2005: 203). The latter authors claim that all forms of recursion in other domains depend on language. For more on this point, see Chomsky (2007: 7– 8). Luuk & Luuk (2011) note that natural numbers are defined recursively, but the mathematical definition does not necessarily play a role for the kinds of arithmetic operations carried out by humans.

domain-specific abilities (Faculty of Language in the Narrow Sense = FLN) required for language.

### **13.1.9 Summary**

In sum, we can say that there are no linguistic universals for which there is a consensus that one has to assume domain-specific innate knowledge to explain them. At the 2008 meeting of the *Deutsche Gesellschaft für Sprachwissenschaft*, Wolfgang Klein promised €100 to anyone who could name a non-trivial property that all languages share (see Klein 2009). This raises the question of what is meant by 'trivial'. It seems clear that all languages share predicate-argument structures and dependency relations in some sense (Hudson 2010b; Longobardi & Roberts 2010: 2701) and that all languages have complex expressions whose meaning can be determined compositionally (Manfred Krifka was promised €20 for coming up with compositionality). However, as has been noted at various points, universality by no means implies innateness (Bates 1984: 189; Newmeyer 2005: 205): Newmeyer gives the example that words for sun and moon probably exist in all languages. This has to do with the fact that these celestial bodies play an important role in everyone's lives and thus one needs words to refer to them. It cannot be concluded from this that the corresponding concepts have to be innate. Similarly, a word that is used to express a relation between two objects (e.g., *catch*) has to be connected to the words describing both of these objects (*I*, *elephant*) in a transparent way. However, this does not necessarily entail that this property of language is innate.

Even if we can find structural properties shared by all languages, this is still not proof of innate linguistic knowledge, as these similarities could be traced back to other factors. It is argued that all languages must be made in such a way as to be acquirable with the paucity of resources available to small children (Hurford 2002: Section 10.7.2; Behrens 2009: 433). It follows from this that, in the relevant phases of its development, our brain is a constraining factor. Languages have to fit into our brains and, since our brains are similar, languages are also similar in certain respects (see Kluender 1992: 251).

# **13.2 Speed of language acquisition**

It is often argued that children learn language extraordinarily quickly and that this can only be explained by assuming that they already possess knowledge about language that does not have to be acquired (e.g., Chomsky 1976b: 144; Hornstein 2013: 395). In order for this argument to hold up to closer scrutiny, it must be demonstrated that other areas of knowledge with a comparable degree of complexity require longer to acquire (Sampson 1989: 214–218). This has not yet been shown. Language acquisition spans several years and it is not possible to simply state that language is acquired following *brief exposure*. Chomsky compares languages to physics and points out that it is considerably more difficult for us to acquire knowledge about physics. Sampson (1989: 215) notes, however, that the knowledge about physics one acquires at school or university is not a basis for comparison and one should instead consider the acquisition of everyday knowledge about the physical world around us: for example, the kind of knowledge we need when we want to pour liquids into a container or skip with a skipping rope, or the knowledge we have about the ballistic properties of objects. Comparing these domains of knowledge in order to make claims about language acquisition may turn out to be far from trivial. For an in-depth discussion of this aspect, see Sampson (1989: 214–218). Müller & Riemer (1998: 1) point out that children at the age of six can understand 23,700 words and use over 5,000. It follows from this that, in the space of four and a half years, they learn on average 14 new words every day. This is indeed an impressive feat, but it cannot be used as an argument for innate linguistic knowledge, as all theories of acquisition assume that words have to be learned from data rather than being predetermined by a genetically determined Universal Grammar. In any case, the assumption of genetic encoding would be highly implausible for newly created words such as *fax*, *iPod*, *e-mail* and *Tamagotchi*.

Furthermore, the claim that first language acquisition is effortless and rapid compared to second language acquisition is a myth, as estimations by Klein (1986: 9) show: if we assume that children hear linguistic utterances for five hours a day (as a conservative estimate), then in the first five years of their lives, they receive 9,100 hours of linguistic training. Yet at the age of five, they have still not acquired all complex constructions. In comparison, second-language learners, assuming the necessary motivation, can learn the grammar of a language rather well in a six-week crash course with twelve hours a day (500 hours in total).

# **13.3 Critical period for acquisition**

Among ducks, there is a critical phase in which their behavior towards parent figures is influenced significantly. Normally, baby ducks follow their mother. If, however, a human is present rather than the mother during a particular time span, the ducks will follow the human. After the critical period, this influence on their behavior can no longer be identified (Lorenz 1970). This kind of critical period can also be identified in other animals and in other areas of cognition, for example the acquisition of visual abilities among primates. Certain abilities are acquired in a given time frame, whereby the presence of the relevant input is important for determining the start of this time frame.

Lenneberg (1964) claims that language acquisition is only possible up to the age of twelve. From the fact that children can learn language much better than adults, he concludes that this is also due to a critical period, that language acquisition has properties similar to the imprinting behavior of ducks described above and that, hence, the predisposition for language acquisition must be innate (Lenneberg 1967: Chapter 4).

The assumptions about the length of the critical period for language acquisition vary considerably. It is possible to find suggestions of 5, 6, 12 and even 15 years (Hakuta et al. 2003: 31). Meisel (2013: 79) talks about "different points of development between 4 and 16 years". An alternative to assuming a critical period would be to assume that the ability to acquire languages decreases continuously over time. Johnson & Newport (1989) tried to determine a critical period for second-language acquisition and claim that a second language is learned significantly worse from the age of 15 onwards. Elman, Bates, Johnson, Karmiloff-Smith, Parisi & Plunkett (1996: 187–188) have, however, pointed out that there is a different curve for Johnson and Newport's data that fits the individual data better. The alternative curve shows no abrupt change but rather a steady decrease in the ability to learn language and therefore offers no proof of an effect created by a critical period.

Hakuta, Bialystok & Wiley (2003) evaluate data from a questionnaire of 2,016,317 Spanish speakers and 324,444 speakers of Mandarin Chinese that immigrated to the United States. They investigated which correlations there were between age, the point at immigration, the general level of education of the speakers and the level of English they acquired. They could not identify a critical point in time after which language acquisition was severely restricted. Instead, there is a steady decline in the ability to learn as age increases. This can also be observed in other domains: for example, learning to drive at an older age is much harder.

Summing up, it seems to be relatively clear that a critical period cannot be proven to exist for second-language acquisition. However, it is sometimes assumed anyway that second-language acquisition is not driven by an innate UG, but is in fact a learning process that accesses knowledge already acquired during the critical period (Lenneberg 1967: 176). One would therefore have to show that there is a critical period for first-language acquisition. This is, however, not straightforward as, for ethical reasons, one cannot experimentally manipulate the point at which the input is available. We cannot, say, take 20 children and let them grow up without linguistic input to the age of 3, 4, 5, 6, … or 15 and then compare the results. This kind of research is dependent on thankfully very rare cases of neglect. For example, Curtiss (1977) studied a girl called Genie. At the time, Genie was 13 years old and had grown up in isolation. She is a so-called feral child. As Curtiss showed, she was no longer able to learn certain linguistic rules. For an objective comparison, one would need other test subjects who had not grown up in complete isolation and in inhumane conditions. The only possibility of gaining relevant experimental data is to study deaf subjects who did not receive any input from a sign language up to a certain age. Johnson & Newport (1989: 63) carried out relevant experiments with learners of American Sign Language. These also showed a linear decline in the ability to learn, but nothing like a sudden drop after a certain age, let alone a complete loss of the ability to acquire language.

# **13.4 Lack of acquisition among non-human primates**

The fact that non-human primates cannot learn natural language is viewed as evidence for the genetic determination of our linguistic ability. All scientists agree on the fact that there are genetically determined differences between humans and primates and that these are relevant for linguistic ability. Friederici (2009) offers an overview of the literature that claims that in chimpanzees and macaques (and small children), the connections between parts of the brain are not as developed as in adult humans. The connected regions of the brain are together responsible for the processing of lexical-semantic knowledge and could constitute an important prerequisite for the development of language (p. 179).

The question is, however, whether we differ from other primates in having special cognitive capabilities that are specific to language or whether our capability to acquire languages is due to domain-general differences in cognition. Fanselow (1992b: Section 2) speaks of a human-specific formal competence that does not necessarily have to be specific to language, however. Similarly, Chomsky (2007: 7–8) has considered whether Merge (in his opinion, the only structure-building operation) belongs not to language-specific innate abilities, but rather to general human-specific competence (see, however, Section 13.1.8, in particular footnote 29).

One can ascertain that non-human primates do not understand particular pointing gestures. Humans like to imitate things. Other primates also imitate, but not for social reasons (Tomasello 2006b: 9–10). According to Tomasello et al. (2005: 676), only humans have the ability and motivation to carry out coordinated activities with common goals and socially-coordinated action plans. Primates do understand intentional actions; however, only humans act with a common goal in mind (*shared intentionality*). Only humans use and understand hand gestures (Tomasello et al. 2005: 685, 724, 726). Language is collaborative to a high degree: symbols are used to refer to objects and sometimes also to the speaker or hearer. In order to be able to use this kind of communication system, one has to be able to put oneself in the shoes of the interlocutor and develop common expectations and goals (Tomasello et al. 2005: 683). Non-human primates could thus lack the social and cognitive prerequisites for language, that is, the difference between humans and other primates does not have to be explained by innate linguistic knowledge (Tomasello 2003: Section 8.1.2; Tomasello et al. 2005).

# **13.5 Creole and sign languages**

When speakers that do not share a common language wish to communicate with each other, they develop so-called pidgin languages. These are languages that use parts of the vocabularies of the languages involved but have a very rudimentary grammar. It has been noted that children of pidgin speakers regularize these languages. The next generation of speakers creates a new language with an independent grammar. These languages are referred to as *creole languages*. One hypothesis is that the form of languages that develop from creolization is restricted by an innate UG (Bickerton 1984b). It is assumed that the parameter setting of creole languages corresponds to the default values of parameters (Bickerton 1984a: 217; 1984b: 178), that is, parameters already have values at birth and these correspond to the values that creole languages have. These default values would have to be modified when learning other languages.<sup>30</sup> Bickerton claims that creole languages contain elements that language learners could not have acquired from the input, that is from the pidgin languages. His argumentation is a variant of the classic Poverty of the Stimulus Argument that will be discussed in more detail in Section 13.8.

<sup>30</sup>For problems that can arise from the assumption of defaults values, see Meisel (1995: 17). Bickerton (1997: 56, fn. 13) distances himself from the claim that creole languages have the default values of parameters.

Bickerton's claims have been criticized since it cannot be verified whether children had input in the individual languages of the adults (Samarin 1984: 207; Seuren 1984: 209). All that can be said given this lack of evidence is that there are a number of demographic facts suggesting that such input was available for at least some creole languages (Arends 2008). This means that children did not only have the strings from the pidgin languages as input but also sentences from the individual languages spoken by their parents and others around them. Many creolists assume that adults contribute specific grammatical forms to the emerging language. For example, in the case of Hawaiian Creole English, one can observe that there are influences from the mother tongues of the speakers involved: Japanese speakers use SOV order as well as SVO, and Filipinos use VOS order as well as SVO order. In total, there is quite a lot of variation in the language that can be traced back to the various native languages of the individual speakers.

It is also possible to explain the effects observed for creolization without the assumption of innate language-specific knowledge: the fact that children regularize language can be attributed to a phenomenon independent of language. In experiments, participants were shown two light bulbs and had to predict which of the bulbs would be turned on next. If one of the bulbs was switched on 70% of the time, the participants also picked this one 70% of the time (although they would actually have had a higher success rate if they had always chosen the bulb that was turned on with 70% probability). This behavior is known as *Probability Matching*. If we add another light bulb to this scenario and turn one lamp on in 70% of cases and the other two each 15% of the time, then participants choose the most frequently lit one 80–90% of the time, that is, they regularize in the direction of the most frequent occurrence (Gardner 1957; Weir 1964).
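The difference between matching and maximizing is easy to verify with a quick back-of-the-envelope simulation (my own sketch, not a reanalysis of the cited experiments):

```python
# Expected success in the two-bulb experiment: guessing with the
# bulbs' own probabilities ("matching") vs. always choosing the
# frequent bulb ("maximizing"), with the 70%/30% setup from the text.

import random

random.seed(0)
TRIALS = 100_000

def success_rate(p_guess_frequent: float) -> float:
    hits = 0
    for _ in range(TRIALS):
        frequent_lit = random.random() < 0.7
        guess_frequent = random.random() < p_guess_frequent
        hits += (frequent_lit == guess_frequent)
    return hits / TRIALS

print(f"matching   (guess 70/30): {success_rate(0.7):.3f}")  # ~0.58
print(f"maximizing (always 70%):  {success_rate(1.0):.3f}")  # ~0.70
# Analytically: 0.7 * 0.7 + 0.3 * 0.3 = 0.58 < 0.70.
```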

Children regularize more than adults (Hudson & Newport 1999, Hudson Kam & Newport 2005), a fact that can be traced back to their limited brain capacity ("less is more" hypothesis, Newport 1990, Elman 1993).

Like creolization, a similar situation can be found in certain social contexts in the acquisition of sign language: Singleton & Newport (2004) have shown that a child (Simon) who learned American Sign Language (ASL) made considerably fewer mistakes than his parents. The parents first learned ASL at the age of 15 or 16 and performed particular obligatory movements only 70% of the time. Simon made these movements 90% of the time. He regularized the input from his parents, whereby the consistent use of form-meaning pairs plays an important role; that is, he does not simply use Probability Matching, but learns selectively. Singleton & Newport (2004: 401) suspect that these kinds of regularizations also play a role in the emergence of creole and sign languages. However, the relevant statistical data that one would need to confirm this hypothesis are not available.

# **13.6 Localization in special parts of the brain**

By measuring brain activity during speech production/processing and also by investigating patients with brain damage, one can identify special parts of the brain (Broca's area and Wernicke's area) that play an important role for language production and processing (see Friederici (2009) for a current overview). Chomsky talks about there being a center of language and even calls this (metaphorically) an *organ* (Chomsky 1977: 164; Chomsky 2005: 1; Chomsky 2008: 133). This localization was seen as evidence for the innate basis of our linguistic knowledge (see also Pinker 1994: 297–314).

However, it is the case that if these parts are damaged, other areas of the brain can take over the relevant functions. If the damage occurs in early childhood, language can also be learned without these special areas of the brain (for sources, see Dąbrowska 2004: Section 4.1).

Apart from that, it can also be observed that a particular area of the brain is activated during reading. If the inference from the localization of processing in a particular part of the brain to the innateness of linguistic knowledge were valid, then the activation of certain areas of the brain during reading should also lead us to conclude that the ability to read is innate (Elman et al. 1996: 242; Bishop 2002: 57). This is, however, not assumed (see also Fitch, Hauser & Chomsky 2005: 196).

It should also be noted that language processing affects several areas of the brain and not just Broca's and Wernicke's areas (Fisher & Marcus 2005: 11; Friederici 2009). On the other hand, Broca's and Wernicke's areas are also active during non-linguistic tasks such as imitation, motor coordination and the processing of music (Maess et al. 2001). For an overview and further sources, see Fisher & Marcus (2005).

Musso et al. (2003) investigated brain activity during second-language acquisition. They gave German native speakers data from Italian and Japanese and noticed that there was activation in Broca's area. They then compared this to artificial languages that used Italian and Japanese words but did not correspond to the principles of Universal Grammar as assumed by the authors. An example of the processes assumed in their artificial languages is the formation of questions by reversing word order, as shown in (39).

(39) a. This is a statement.
	- b. Statement a is this?

The authors then observed that different areas of the brain were activated when learning this artificial language. This is an interesting result, but does not show that we have innate linguistic knowledge. It only shows that the areas that are active when processing our native languages are also active when we learn other languages and that playing around with words such as reversing the order of words in a sentence affects other areas of the brain.

A detailed discussion of localization of languages in particular parts of the brain can be found in Dąbrowska (2004: Chapter 4).

# **13.7 Differences between language and general cognition**

Researchers who believe that there is no such thing as innate linguistic knowledge assume that language can be acquired with general cognitive means. If it can be shown that humans with severely impaired cognition can still acquire normal linguistic abilities or that there are people of normal intelligence whose linguistic ability is restricted, then one can show that language and general cognition are independent.

### **13.7.1 Williams Syndrome**

There are people with a relatively low IQ, who can nevertheless produce grammatical utterances. Among these are people with Williams Syndrome (see Bellugi, Lichtenberger, Jones, Lai & George (2000) for a discussion of the abilities of people with Williams Syndrome). Yamada (1981) takes the existence of such cases as evidence for a separate module of grammar, independent of the remaining intelligence.

IQ is classically determined by dividing the mental age established by an intelligence test by the chronological age (and multiplying by 100). The teenagers that were studied all had a mental age corresponding to that of a four to six-year-old child. Yet children at this age already boast impressive linguistic abilities that come close to those of adults in many respects. Gosch, Städing & Pankau (1994: 295) have shown that children with Williams Syndrome do show a linguistic deficit and that their language ability corresponds to what would be expected from their mental age. For problems of sufferers of Williams Syndrome in the area of morphosyntax, see Karmiloff-Smith et al. (1997). The discussion about Williams Syndrome is summarized nicely in Karmiloff-Smith (1998).

### **13.7.2 KE family with FoxP2 mutation**

There is a British family – the so-called KE family – that has problems with language. The members of this family who suffer from these linguistic problems have a genetic defect. Fisher et al. (1998) and Lai et al. (2001) discovered that this is due to a mutation of the FoxP2 gene (FoxP2 stands for *Forkhead-Box P2*). Gopnik & Crago (1991) conclude from the fact that deficits in the realm of morphology are inherited together with genetic defects that there must be a gene that is responsible for a particular module of grammar (morphology). Vargha-Khadem et al. (1995: 930) have demonstrated, however, that the KE family did not just have problems with morphosyntax: the affected family members have intellectual and linguistic problems together with motor problems involving the facial muscles. Given the considerably restricted motion of their facial muscles, it would make sense to assume that their linguistic difficulties also stem from these motor problems (Tomasello 2003: 285). Moreover, the linguistic problems in the KE family are not limited to production problems; they also comprise comprehension problems (Bishop 2002: 58). Nevertheless, one cannot associate linguistic deficiencies directly with FoxP2, as there are a number of other abilities that are affected by the FoxP2 mutation: as well as hindering pronunciation, morphology and syntax, it also has an effect on non-verbal IQ and on motor control of the facial muscles in non-linguistic tasks, too (Vargha-Khadem et al. 1995).

Furthermore, FoxP2 also occurs in other body tissues: it is also responsible for the development of the lungs, the heart, the intestine and various regions of the brain (Marcus & Fisher 2003). Marcus & Fisher (2003: 260–261) point out that FoxP2 is probably not directly responsible for the development of organs or areas of organs but rather regulates a cascade of different genes. FoxP2 can therefore not be referred to as *the* language gene; it is just a gene that interacts with other genes in complex ways and that is, among other things, important for our language ability. Just as it does not make sense to call FoxP2 a language gene, nobody would connect a hereditary muscle disorder with a 'walking gene' merely because the myopathy prevents upright walking (Bishop 2002: 58). A similar argument can be found in Karmiloff-Smith (1998: 392): there is a genetic defect that leads some people to begin to lose their hearing from the age of ten and to become completely deaf by age thirty. This genetic defect causes changes in the hairs inside the ear that one requires for hearing. In this case, one would also not want to talk about a 'hearing gene'.

Fitch, Hauser & Chomsky (2005: 190) are also of the opinion that FoxP2 cannot be responsible for linguistic knowledge. For an overview of this topic, see Bishop (2002) and Dąbrowska (2004: Section 6.4.2.2) and for genetic questions in general, see Fisher & Marcus (2005).

# **13.8 Poverty of the Stimulus**

An important argument for the innateness of linguistic knowledge is the so-called Poverty of the Stimulus Argument (PSA) (Chomsky 1980: 34). Different versions of it can be found in the literature and have been carefully discussed by Pullum & Scholz (2002). After discussing these variants, they summarize the logical structure of the argument as follows (p. 18):

	- a. Children learn their first language either by data-driven learning or by a learning process supported by innate knowledge (disjunctive premise)
	- b. If children learn their first language by data-driven learning, then they could not acquire anything for which they did not have the necessary evidence (the definition of data-driven learning)
	- c. However, children do in fact learn things that they do not seem to have decisive evidence for (empirical prerequisite)
	- d. Therefore, children do not learn their first language by data-driven learning. (*modus tollens* of b and c)
	- e. Conclusion: children learn language through a learning process supported by innate knowledge. (disjunctive syllogism of a and d)

Pullum and Scholz then discuss four phenomena that have been claimed to constitute evidence for there being innate linguistic knowledge. These are plurals as initial parts of compounds in English (Gordon 1985), sequences of auxiliaries in English (Kimball 1973), anaphoric *one* in English (Baker 1978) and the position of auxiliaries in English (Chomsky 1971: 29–33). Before I turn to these cases in Section 13.8.2, I will discuss a variant of the PSA that refers to the formal properties of phrase structure grammars.

### **13.8.1 Gold's Theorem**

In theories of formal languages, a language is viewed as a set containing all the expressions belonging to a particular language. This kind of set can be captured using rewrite grammars of various complexity. One kind of rewrite grammar – so-called context-free grammars – was presented in Chapter 2. In context-free grammars, there is always exactly one symbol on the left-hand side of a rule (a so-called non-terminal symbol). On the right-hand side of a rule, there can be several symbols: either non-terminal symbols or words/morphemes of the language in question (so-called terminal symbols). The words in a grammar are also referred to as the vocabulary (V). Part of a formal grammar is a start symbol, which is usually S. In the literature, this has been criticized since not all expressions are sentences (see Deppermann 2006: 44). It is, however, not necessary to assume this. It is possible to use Utterance as the start symbol and define rules that derive S, NP, VP or whatever else one wishes to class as an utterance from Utterance.<sup>31</sup>

Beginning with the start symbol, one can keep applying the phrase structure rules of a grammar until one arrives at sequences that only contain words (terminal symbols). The set of all sequences that one can generate in this way is the set of expressions that belong to the language licensed by the grammar. This set is a subset of the set of all sequences of words or morphemes that can be created by arbitrary combination. The set that contains all possible sequences is referred to as V<sup>∗</sup>.

Gold (1967) has shown that, given only a finite amount of linguistic input from an environment E, it is not possible to solve the identification problem for languages from certain language classes without additional knowledge. Gold is concerned with the identification of a language from a given class of languages. A language L counts as identified if, at a given point in time t, a learner can determine that L is the language in question and does not change this hypothesis. This point in time is not determined in advance; however, identification has to take place at some point. Gold calls this *identification in the limit*. The environments are arbitrary infinite sequences of sentences ⟨a<sub>1</sub>, a<sub>2</sub>, a<sub>3</sub>, …⟩, whereby each sentence in the language must occur at least once in the sequence. In order to show that the identification problem cannot be solved even for very simple language classes, Gold considers the class of languages that contain all possible sequences of words from the vocabulary V except for one sequence: let V be the vocabulary and x<sub>1</sub>, x<sub>2</sub>, x<sub>3</sub>, … the sequences of words from this vocabulary. The set of all strings over this vocabulary is V<sup>∗</sup>. For the class of languages in (41), each of which consists of all possible sequences of elements in V with the exception of one sequence, it is possible to state a procedure by which one could learn these languages from a text.

(41) L<sub>1</sub> = V<sup>∗</sup> − x<sub>1</sub>, L<sub>2</sub> = V<sup>∗</sup> − x<sub>2</sub>, L<sub>3</sub> = V<sup>∗</sup> − x<sub>3</sub>, …

<sup>31</sup>On page 283, I discussed a description that corresponds to the S symbol in phrase structure grammars. If one omits the specification of head features in this description, then one gets a description of all complete phrases, that is, also *the man* or *now*. See also Ginzburg & Sag (2000: Section 8.1.4) for a unary branching rule that projects an utterance fragment to a sentential category incorporating the utterance context. See Nykiel & Kim (2021) for further details and references on ellipsis in HPSG.


After every input, one can guess that the language is V<sup>∗</sup> − x<sub>i</sub>, where x<sub>i</sub> stands for the alphabetically first sequence of the shortest length that has not yet been seen. If the sequence in question occurs later, then this hypothesis is revised accordingly. In this way, one will eventually arrive at the correct language.
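This guessing procedure can be simulated directly (a sketch under simplifying assumptions of my own: a two-word vocabulary, strings enumerated by length and then alphabetically, and V<sup>∗</sup> − {ab} as the target language):

```python
# Identification in the limit for the class L_i = V* - {x_i}.

from itertools import count, product

V = ["a", "b"]

def enumerate_strings():
    """Yield V* ordered by length, alphabetically within each length."""
    for n in count(1):
        for combo in product(V, repeat=n):
            yield "".join(combo)

def guess(seen: set) -> str:
    """Hypothesis: V* minus the first string not yet encountered."""
    for s in enumerate_strings():
        if s not in seen:
            return f"V* - {{{s}}}"

# Target language: V* - {ab}. Feed the learner a text.
text = ["a", "b", "aa", "ba", "bb", "aaa", "aab"]
seen = set()
for sentence in text:
    seen.add(sentence)
    print(f"after '{sentence}': hypothesis {guess(seen)}")
# From the third datum on, the guess is V* - {ab} and never changes
# again, since 'ab' is missing from the text forever.
```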

If we expand the set of languages from which we have to choose by V<sup>∗</sup> itself, then our learning procedure will no longer work since, if V<sup>∗</sup> is the target language, the guessing will perpetually yield incorrect results. If there were a procedure capable of learning this language class, then it would have to correctly identify V<sup>∗</sup> after a certain number of inputs. Let us assume that this happens after the input x<sub>i</sub>. How can the learning procedure tell us at this point that the language we are looking for is not V<sup>∗</sup> − x<sub>j</sub> for some x<sub>j</sub> that has not yet occurred in the input? If x<sub>i</sub> causes one to guess V<sup>∗</sup> and V<sup>∗</sup> is the wrong result, then every input that comes after that will be compatible with both the correct (V<sup>∗</sup> − x<sub>j</sub>) and the incorrect (V<sup>∗</sup>) hypothesis. Since we only have positive data, no input allows us to distinguish between the two hypotheses and provide the information that we have found a superset of the language we are looking for. Gold has shown that none of the classes of grammars assumed in the theory of formal languages (for example, regular, context-free and context-sensitive languages) can be identified after a finite number of steps given the input of a text with example utterances. This is true for all classes of languages that contain all finite languages and at least one infinite language. The situation is different if positive and negative data are used for learning instead of text.

The conclusion that has been drawn from Gold's results is that, for language acquisition, one requires knowledge that helps to avoid particular hypotheses from the start. Pullum (2003) criticizes the use of Gold's findings as evidence for the fact that linguistic knowledge must be innate. He lists a number of assumptions that have to be made in order for Gold's results to be relevant for the acquisition of natural languages. He then shows that each of these is not uncontroversial.


Furthermore, Pullum notes that it is also possible to learn the class of context-sensitive grammars with Gold's procedure from positive input alone in a finite number of steps if there is an upper bound k for the number of rules, where k is an arbitrary number. It is possible to make k so big that the cognitive abilities of the human brain would not be able to use a grammar with more rules than this. Since it is normally assumed that natural languages can be described by context-sensitive grammars, it can therefore be shown that the syntax of natural languages is, in Gold's sense, learnable from texts (see also Scholz & Pullum 2002: 195–196).

Johnson (2004) adds that there is another important point that has been overlooked in the discussion about language acquisition. Gold's problem of identifiability is different from the problem of language acquisition that has played an important role in the nativism debate. In order to make the difference clear, Johnson differentiates between identifiability (in the Goldian sense) and learnability in the sense of language acquisition. Identifiability for a language class C means that there must be a function f that, for each environment for each language L in C, permanently converges on the hypothesis L as the target language in a finite amount of time.

Johnson proposes the following as the definition of *learnability* (p. 585): *A class C of natural languages is learnable iff, given almost any normal human child and almost any normal linguistic environment for any language L in C, the child will acquire L (or something sufficiently similar to L) as a native language between the ages of one and five.* Johnson adds the caveat that this definition does not correspond to any theory of learnability in psycholinguistics, but rather is a hint in the direction of a realistic conception of acquisition.

Johnson notes that in most interpretations of Gold's theorem, identifiability and learnability are viewed as one and the same, and he shows that this is not logically correct: the main difference between the two depends on the use of two quantifiers. Identifiability of *one* language L from a class requires that the learner converges on L in *every* environment after a finite amount of time. This time can differ greatly from environment to environment. There is not even an upper bound for the time in question. It is straightforward to construct a sequence of environments E<sub>1</sub>, E<sub>2</sub>, … for L such that a learner in environment E<sub>n</sub> will not guess L before time t<sub>n</sub>. Unlike identifiability, learnability means that there is a point in time after which, in every normal environment, *every* normal child has converged on the correct language. This means that children acquire their language within a particular time span. Johnson quotes Morgan (1989: 352) claiming that children learn their native language after they have heard approximately 4,280,000 sentences. If we assume that the concept of learnability imposes a finite upper bound on the available time, then very few language classes can be identified in the limit. Johnson shows this as follows: let C be a class of languages containing L and L′, where L and L′ have some elements in common. It is possible to construct a text such that the first n sentences are contained both in L and in L′. If the learner has L as its working hypothesis, then the text is continued with sentences from L′; if the learner's hypothesis is L′, then it is continued with sentences from L. In either case, the learner has entertained a false hypothesis after n steps. This means that identifiability is not a plausible model for language acquisition.
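Johnson's construction amounts to a small adversarial procedure, sketched below with toy languages and a deliberately naive learner (all invented for illustration): whatever the learner's current hypothesis is, the next datum is drawn from the other language.

```python
# An adversarial text in the spirit of Johnson's argument: the
# learner's hypothesis at step n is always falsified at step n+1.

L1 = {"s1", "s2", "shared"}
L2 = {"s3", "s4", "shared"}

def learner(seen: list) -> str:
    """Toy learner: hypothesize a language fitting the last datum."""
    last = seen[-1]
    return "L1" if last in L1 and last not in L2 else "L2"

seen = ["shared"]               # the first datum fits both languages
for step in range(1, 6):
    hypothesis = learner(seen)
    other = L2 if hypothesis == "L1" else L1
    datum = sorted(other - {"shared"})[0]   # adversary's next sentence
    print(f"step {step}: hypothesis {hypothesis}, next datum {datum}")
    seen.append(datum)
# No fixed deadline n can guarantee convergence: the text can always
# be continued so that the hypothesis entertained at step n is wrong.
```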

Aside from the fact that identifiability is psychologically unrealistic, it is also logically independent of learnability (Johnson 2004: 586). For identifiability, only one learner has to be found (the function mentioned above); learnability, however, quantifies over (almost) all normal children. If one keeps all other factors constant, it is therefore easier to show the identifiability of a language class than its learnability. On the other hand, identifiability quantifies universally over all environments, regardless of whether these may seem odd or of how many repetitions they may contain. Learnability, in contrast, involves (almost) universal quantification exclusively over normal environments. Learnability therefore refers to fewer environments than identifiability, so that there are fewer possibilities for problematic texts that could occur as an input and render a language unlearnable. Furthermore, learnability is defined in such a way that the learner does not have to learn L exactly, but rather something sufficiently similar to L. With respect to this aspect, learnability is a weaker property of a language class than identifiability. Therefore, learnability follows neither from identifiability nor identifiability from learnability.

Finally, Gold is dealing with the acquisition of syntactic knowledge without taking semantic knowledge into consideration. However, children possess a vast amount of information from the context that they employ when acquiring a language (Tomasello et al. 2005). As pointed out by Klein (1986: 44), humans do not learn anything if they are placed in a room and sentences in Mandarin Chinese are played to them. Language is acquired in a social and cultural context.

In sum, one should note that the existence of innate linguistic knowledge cannot be derived from mathematical findings about the learnability of languages.

### **13.8.2 Four case studies**

Pullum & Scholz (2002) have investigated four prominent instances of the Poverty of the Stimulus Argument in more detail. These will be discussed in what follows. Pullum and Scholz's article appeared in a discussion volume. Arguments against their article are addressed by Scholz & Pullum (2002) in the same volume. Further PoS arguments from Chomsky (1986b) and from literature in German have been disproved by Eisenberg (1992).

### **13.8.2.1 Plurals in noun-noun compounding**

Gordon (1985) claims that English compounds only allow irregular plurals as their first element, that is, *mice-eater*, but ostensibly not *\*rats-eater*. Gordon claims that compounds with irregular plurals as their first element are so rare that children could not have learned purely from data that such compounds are possible.

On pages 25–26, Pullum & Scholz discuss data from English that show that regular plurals can indeed occur as the first element of a compound (*chemicals-maker*, *forms-reader*, *generics-maker*, *securities-dealer*, *drinks trolley*, *rules committee*, *publications catalogue*).<sup>32</sup> This shows that what could have allegedly not been learned from data is in fact not linguistically adequate and one therefore does not have to explain its acquisition.

#### **13.8.2.2 Position of auxiliaries**

The second study deals with the position of modal and auxiliary verbs. Kimball (1973: 73–75) discusses the data in (42) and the rule in (43) that is similar to one of the rules suggested by Chomsky (1957: 39) and is designed to capture the following data:

(42) a. It rains.
	- b. It may rain.
	- c. It may have rained.
	- d. It may be raining.
	- e. It has rained.
	- f. It is raining.
	- g. It has been raining.
	- h. It may have been raining.


(43) Aux → T(M)(have+en)(be+ing)

T stands for tense, M for a modal verb, and -*en* stands for the participle morpheme (-*en* in *been*/*seen*/… and -*ed* in *rained*). The brackets indicate the optionality of the respective expressions. Kimball notes that this rule can only be formulated in this way if (42h) is well-formed. If this were not the case, then one would have to reorganize the material into rules covering the three cases (M)(have+en), (M)(be+ing) and (have+en)(be+ing) separately. Kimball assumes that children master the complex rule since they know that sentences such as (42h) are well-formed and since they know the order in which modal and auxiliary verbs must occur. Kimball assumes that children do not have positive evidence for the order in (42h) and concludes from this that the knowledge about the rule in (43) must be innate.
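Since each bracketed element in (43) is independently optional, the rule compresses exactly eight expansions, one for each sentence in (42). They can be enumerated mechanically (the rule is Kimball's; the enumeration code is merely illustrative):

```python
# Enumerating the expansions of (43): Aux -> T (M) (have+en) (be+ing).

from itertools import product

OPTIONAL = ["M", "have+en", "be+ing"]

expansions = []
for choice in product([False, True], repeat=len(OPTIONAL)):
    parts = ["T"] + [el for el, on in zip(OPTIONAL, choice) if on]
    expansions.append(" ".join(parts))

for e in expansions:
    print(e)
print(len(expansions), "expansions")   # 8, matching (42a-h)
```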

Pullum and Scholz note two problems with this Poverty of the Stimulus Argument: first, they have found hundreds of examples, among them some from children's stories, so that Kimball's claim that sentences such as (42h) are "vanishingly rare" should

<sup>32</sup>Also, see Abney (1996: 7) for examples from the Wall Street Journal.

be called into question. For PSA arguments, one should at least specify how many occurrences are permitted if one still wants to claim that nothing can be learned from them (Pullum & Scholz 2002: 29).

The second problem is that it does not make sense to assume that the rule in (43) plays a role in our linguistic knowledge. Empirical findings have shown that this rule is not descriptively adequate. If the rule in (43) is not descriptively adequate, then it cannot achieve explanatory adequacy and therefore one no longer has to explain how it can be acquired.

Instead of a rule such as (43), all theories discussed here currently assume that auxiliary or modal verbs embed a phrase, that is, one does not have an Aux node containing all auxiliary and modal verbs, but rather a structure for (42h) that looks as follows:

(44) It [may [have [been raining]]].

Here, the auxiliary or modal verb always selects the embedded phrase. The acquisition problem now looks completely different: a speaker has to learn the form of the head verb in the verbal projection selected by the auxiliary or modal verb. If this information has been learned, then it is irrelevant how complex the embedded verbal projections are: *may* can be combined with a non-finite lexical verb (42b) or a non-finite auxiliary (42c,d).

### **13.8.2.3 Reference of** *one*

The third case study investigated by Pullum and Scholz deals with the pronoun *one* in English. Baker (1978: 413–425, 327–340) claims that children cannot learn that *one* can refer to constituents larger than a single word as in (45).

	- b. The old man from France was more erudite than the young *one*.

Baker (416–417) claims that *one* can never refer to single nouns inside of NPs and supports this with examples such as (46):

(46) \* The student of chemistry was more thoroughly prepared than the one of physics.

According to Baker, learners would require negative data in order to acquire this knowledge about ungrammaticality. Since learners – following his argumentation – never have access to negative evidence, they cannot possibly have learned the relevant knowledge and must therefore already possess it.

Pullum & Scholz (2002: 33) point out that there are acceptable examples with the same structure as the examples in (46):

	- b. An advocate of Linux got into a heated discussion with one of Windows NT and the rest of the evening was nerd talk.

This means that there is nothing to learn with regard to the well-formedness of the structure in (46). Furthermore, the available data for acquiring the fact that *one* can refer to larger constituents is not as hopeless as Baker (p. 416) claims: there are examples that only allow an interpretation where *one* refers to a larger string of words. Pullum and Scholz offer examples from various corpora. They also provide examples from the CHILDES corpus, a corpus that contains communication with children (MacWhinney 1991). The following example is from a daytime TV show:

	- B: "Maybe I will, someday. But he'd have to be somebody very special. Sensitive and supportive, giving. Hey, wait a minute, where do they make guys like this?"
	- A: "I don't know. I've never seen one up close."

Here, it is clear that *one* cannot refer to *guys* since A has certainly already seen *guys*. Instead, it refers to *guys like this*, that is, men who are sensitive and supportive.

Once again, the question arises here as to how many instances a learner has to hear for it to count as evidence in the eyes of proponents of the PSA.

### **13.8.2.4 Position of auxiliaries in polar questions**

The fourth PoS argument discussed by Pullum and Scholz comes from Chomsky and pertains to the position of the auxiliary in polar interrogatives in English. As shown on page 97, it was assumed in GB theory that a polar question is derived by movement of the auxiliary from the I position to the initial position C of the sentence. In early versions of Transformational Grammar, the exact analyses were different, but the main point was that the highest auxiliary is moved to the beginning of the clause. Chomsky (1971: 29–33) discusses the sentences in (49) and claims that children know that they have to move the highest auxiliary verb even without having positive evidence for this.<sup>33</sup> If, for example, they entertained the hypothesis that one simply places the first auxiliary at the beginning of the sentence, then this hypothesis would deliver the correct result (49b) for (49a), but not for (49c) since the polar question should be (49d) and not (49e).

(49) a. The dog in the corner is hungry.
	- b. Is the dog in the corner hungry?
	- c. The dog that is in the corner is hungry.
	- d. Is the dog that is in the corner hungry?
	- e. \* Is the dog that in the corner is hungry?

Chomsky claims that children do not have any evidence for the fact that the hypothesis that one simply fronts the linearly first auxiliary is wrong, which is why they could pursue this hypothesis in a data-driven learning process. He even goes so far as to claim that

<sup>33</sup>Examples with auxiliary inversion are used in more recent PoS arguments too, for example in Berwick, Pietroski, Yankama & Chomsky (2011) and Chomsky (2013: 39). Work by Bod (2009b) is not discussed by the authors. For more on Bod's approach, see Section 13.8.3.

speakers of English only rarely or even never produce examples such as (49d) (Chomsky in Piattelli-Palmarini (1980: 114–115)). With the help of corpus data and plausibly constructed examples, Pullum (1996) has shown that this claim is clearly wrong. Pullum (1996) provides examples from the Wall Street Journal, and Pullum & Scholz (2002) discuss the relevant examples in more detail and supplement them with examples from the CHILDES corpus, showing that adult speakers can not only produce the relevant kinds of sentences, but also that these occur in the child's input.<sup>34</sup> Examples from CHILDES that disprove the hypothesis that the first auxiliary has to be fronted are given in (50):<sup>35</sup>

	- b. Where's this little boy who's full of smiles?
	- c. While you're sleeping, shall I make the breakfast?

Pullum and Scholz point out that *wh*-questions such as (50b) are also relevant if one assumes that these are derived from polar questions (see page 97 in this book) and if one wishes to show how the child can learn the structure-dependent hypothesis. This can be explained with the examples in (51): the base form from which (51a) is derived is (51b). If we were to front the first auxiliary in (51b), we would produce (51c).

(51) a. Where's the application Mark promised to fill out?
	- b. the application Mark [AUX PAST] promised to fill out [AUX is] there
	- c. \* Where did the application Mark promised to fill out is?

Evidence for the fact that (51c) is not correct can, however, also be found in language addressed to children. Pullum and Scholz provide the examples in (52):<sup>37</sup>

	- b. Where's the other dolly that was in here?
	- c. Where's the other doll that goes in there?

These questions have the form *Where's NP?*, where NP contains a relative clause.

In (50c), there is another clause preceding the actual interrogative, an adjunct clause containing an auxiliary as well. This sentence therefore provides evidence for the falsehood of the hypothesis that the linearly first auxiliary must be fronted (Sampson 1989: 223).

In total, there are a number of attested sentence types in the input of children that would allow them to choose between the two hypotheses. Once again, the question arises as to how much evidence should be viewed as sufficient.

<sup>34</sup>For more on this point, see Sampson (1989: 223). Sampson cites part of a poem by William Blake, which is studied in English schools, as well as a children's encyclopedia. These examples surely do not play a role in the acquisition of auxiliary position since this order is learned at the age of 3;2, that is, it has already been learned by the time children reach school age.

<sup>35</sup>See Lewis & Elman (2002). Researchers on language acquisition agree that the frequency of such examples in communication with children is in fact very low. See Ambridge et al. (2008: 223).

<sup>36</sup>From the transcription of a TV program in the CHILDES corpus.

<sup>37</sup>These sentences are taken from NINA05.CHA in DATABASE/ENG/SUPPES/.

Pullum and Scholz's article has been criticized by Lasnik & Uriagereka (2002) and Legate & Yang (2002). Lasnik and Uriagereka argue that the acquisition problem is much bigger than presented by Pullum and Scholz, since a learner without any knowledge about the language to be acquired could entertain not just the hypotheses in (53) that were discussed already but also the additional hypotheses in (54):

(54) a. Place the first auxiliary in matrix-Infl at the front of the clause.
	- b. Place any finite auxiliary at the front of the clause.

Both hypotheses in (54) would be permitted by the sentences in (55):

	- b. Is the dog that is in the corner hungry?

They would, however, also allow sentences such as (56):

(56) \* Is the dog that in the corner is hungry?

The question that must now be addressed is why all hypotheses that allow (56) should be discarded, since the learners do not have any information in their natural linguistic input about the fact that (56) is not possible: they are lacking negative evidence. If (55b) is present as positive evidence, then this by no means implies that the hypothesis in (53b) has to be the correct one. Lasnik and Uriagereka present the following hypotheses that would also be compatible with (55b):

	- b. Place the first auxiliary in initial position (that follows the first complete constituent).
	- c. Place the first auxiliary in initial position (that follows the first parsed semantic unit).

These hypotheses fail for sentences such as (58), which contain a conjunction:

(58) Will those who are coming and those who are not coming raise their hands?

The hypotheses in (57) would also allow for sentences such as (59):

(59) \* Are those who are coming and those who not coming will raise their hands?

Speakers hearing sentences such as (58) can reject the hypotheses in (57) and thereby rule out (59); however, it is still possible to think of analogous implausible hypotheses that are compatible with all the data discussed so far.

Legate & Yang (2002) take up the challenge of Pullum and Scholz and explicitly state how many occurrences one needs to acquire a particular phenomenon. They write the following:

Suppose we have two independent problems of acquisition, P<sub>1</sub> and P<sub>2</sub>, each of which involves a binary decision. For P<sub>1</sub>, let F<sub>1</sub> be the frequency of the data that can settle P<sub>1</sub> one way or another, and for P<sub>2</sub>, F<sub>2</sub>. Suppose further that children successfully acquire P<sub>1</sub> and P<sub>2</sub> at roughly the same developmental stage. Then, under any theory that makes quantitative predictions of language development, we expect F<sub>1</sub> and F<sub>2</sub> to be roughly the same. Conversely, if F<sub>1</sub> and F<sub>2</sub> turn out significantly different, then P<sub>1</sub> and P<sub>2</sub> must represent qualitatively different learning problems.

Now let P<sub>1</sub> be the auxiliary inversion problem. The two choices are the structure-dependent hypothesis (3b-i) and the first auxiliary hypothesis (3a-i). (Legate & Yang 2002: 155)

The position of auxiliaries in English is learned by children at the age of 3;2. According to Legate and Yang, another acquisition phenomenon that is mastered at the age of 3;2 is needed for comparison. The authors focus on subject drop,<sup>38</sup> which is learned at 36 months (two months earlier than auxiliary inversion). Both acquisition problems involve a binary decision: in the first case, one has to choose between the two hypotheses in (53); in the second case, the learner has to determine whether a language uses overt subjects. The authors assume that the use of expletives such as *there* serves as evidence for learners that the language they are learning is not one with optional subjects. They then count the sentences in the CHILDES corpus that contain *there*-subjects and estimate F<sub>2</sub> at 1.2 % of the sentences heard by the learner. Since, in their opinion, we are dealing with equally difficult phenomena here, sentences such as (49d) and (52) should constitute 1.2 % of the input in order for auxiliary inversion to be learnable.

The authors then searched the Nina and Adam corpora (both part of CHILDES) and note that only 0.045 to 0.068 % of utterances have the form of (52) and none have the form of (49d). They conclude that this number is not sufficient as positive evidence.
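The quantitative core of their argument is a simple frequency comparison. The following lines are my own illustrative arithmetic using the percentages just cited:

```python
# Legate & Yang's comparison: if the two learning problems were equally hard,
# the relevant evidence should be about equally frequent in the input.
f2 = 1.2                         # % of input with expletive `there` subjects
f1_low, f1_high = 0.045, 0.068   # % of input of the form (52); (49d)-type: 0 %

print(f"{f2 / f1_high:.0f} to {f2 / f1_low:.0f} times rarer")
# -> "18 to 27 times rarer": (52)-type evidence is far below the 1.2 % mark.
```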

Legate and Yang are right in pointing out that Pullum and Scholz's data from the Wall Street Journal are not necessarily relevant for language acquisition and also in pointing out that examples with complex subject noun phrases do not occur in the data or occur only to a negligible degree. There are, however, three serious problems with their argumentation. First, there is no correlation between the occurrence of expletive subjects and the property of being a pro-drop language: Galician (Raposo & Uriagereka 1990: Section 2.5) is a pro-drop language with subject expletive pronouns; Italian has an existential expletive *ci*,<sup>39</sup> even though it counts as a pro-drop language; and Franks (1995) lists Upper and Lower Sorbian as pro-drop languages that have expletives in subject position. Since expletive pronouns therefore have nothing to do with the pro-drop parameter, their frequency is irrelevant for the acquisition of a parameter value. If there were a correlation between the possibility of omitting subjects and the occurrence of subject expletives, then Norwegian and Danish children should learn that there has to be a

<sup>38</sup>This phenomenon is also called *pro-drop*. For a detailed discussion of the pro-drop parameter see Section 16.1.

<sup>39</sup>However, *ci* is not treated as an expletive by all authors. See Remberger (2009) for an overview.

subject in their languages earlier than children learning English since expletives occur a higher percentage of the time in Danish and Norwegian (Scholz & Pullum 2002: 220). In Danish, the constructions corresponding to *there*-constructions in English are twice as frequent. It is still unclear whether there are actually differences in rate of acquisition (Pullum 2009: 246).

Second, in constructing their Poverty of the Stimulus argument, Legate and Yang assume that there is innate linguistic knowledge (the pro-drop parameter). Therefore their argument is circular since it is supposed to show that the assumption of innate linguistic knowledge is indispensable (Scholz & Pullum 2002: 220).

The third problem in Legate and Yang's argumentation is that they assume that a transformational analysis is the only possibility. This becomes clear from the following citation (Legate & Yang 2002: 153):

The correct operation for question formation is, of course, structure dependent: it involves parsing the sentence into structurally organized phrases, and fronting the auxiliary that follows the subject NP, which can be arbitrarily long:

	- b. Has [the man that is reading a book] e eaten supper?

The analysis put forward by Chomsky (see page 97) is a transformation-based one, that is, a learner has to learn exactly what Legate and Yang describe: the auxiliary must move in front of the subject noun phrase. There are, however, alternative analyses that do not require transformations or equivalent mechanisms. If our linguistic knowledge does not contain any information about transformations, then their claim about what has to be learned is wrong. For example, one can assume, as in Categorial Grammar, that auxiliaries form a word class with particular distributional properties. One possible placement for them is the initial position observed in questions; the alternative is after the subject (Villavicencio 2002: 104). There would then be the need to acquire information about whether the subject is realized to the left or to the right of its head. As an alternative to this lexicon-based analysis, one could pursue a Construction Grammar (Fillmore 1988: 44; 1999; Kay & Fillmore 1999: 18), Cognitive Grammar (Dąbrowska 2004: Chapter 9), or HPSG (Ginzburg & Sag 2000, Sag et al. 2020) approach. In these frameworks, there are simply two<sup>40</sup> schemata for the two sequences that assign different meanings according to the order of verb and subject. The acquisition problem is then that the learners have to identify the corresponding phrasal patterns in the input. They have to realize that Aux NP VP is a well-formed structure in English that has interrogative semantics. The relevant theories of acquisition in the Construction Grammar-oriented literature have been very well worked out (see Sections 16.3 and 16.4). Construction-based theories of acquisition are also supported by observable frequency effects: auxiliary inversion is first produced by children with just a few auxiliaries, and only in later phases of development is it extended to all auxiliaries. If speakers

<sup>40</sup>Fillmore (1999) assumes subtypes of the Subject Auxiliary Inversion Construction since this kind of inversion does not only occur in questions.

have learned that auxiliary constructions have the pattern Aux NP VP, then the coordination data provided by Lasnik and Uriagereka in (58) no longer pose a problem since, if we only assign the first conjunct to the NP in the pattern Aux NP VP, then the rest of the coordinate structure (*and those who are not coming*) remains unanalyzed and cannot be incorporated into the entire sentence. The hearer is thereby forced to revise his assumption that *will those who are coming* corresponds to the sequence Aux NP in Aux NP VP and instead to use the entire NP *those who are coming and those who are not coming*. For acquisition, it is therefore enough to simply learn the pattern Aux NP VP first for some and then eventually for all auxiliaries in English. This has also been shown by Lewis & Elman (2002), who trained a neural network exclusively with data that did not contain NPs with relative clauses in auxiliary constructions. Relative clauses were, however, present in other structures. The complexity of the training material was increased bit by bit just as is the case for the linguistic input that children receive (Elman 1993).<sup>41</sup> The neural network can predict the next symbol after a sequence of words. For sentences with interrogative word order, the predictions are correct. Even the relative pronoun in (60) is predicted despite the sequence Aux Det N Relp never occurring in the training material.

(60) Is the boy who is smoking crazy?

Furthermore, the system signals an error if the network is presented with the ungrammatical sentence (61):

(61) \* Is the boy who smoking is crazy?

A present participle is not expected after the relative pronoun, but rather a finite verb. The constructed neural network is of course not yet an adequate model of what is going on in our heads during acquisition and speech production.<sup>42</sup> The experiment shows, however, that the input that the learner receives contains rich statistical information that can be used when acquiring language. Lewis and Elman point out that the statistical information about the distribution of words in the input is not the only information that speakers have. In addition to information about distribution, they are also exposed to information about the context and can make use of phonological similarities in words.

In connection with the ungrammatical sentences in (61), it has been claimed that the fact that such sentences are never produced shows that children already know that grammatical operations are structure-dependent and that this is why they do not entertain the hypothesis that it is simply the linearly first verb that is moved (Crain & Nakayama 1987). This claim cannot simply be verified since children do not normally form the relevant complex utterances. It is therefore only possible to experimentally elicit utterances where they could make the relevant mistakes. Crain & Nakayama (1987) have carried out

<sup>41</sup>There are cultural differences. In some cultures, adults do not talk to children that have not attained full linguistic competence (Ochs 1982, Ochs & Schieffelin 1985) (also see Section 13.8.4). Children therefore have to learn the language from their environment, that is, the sentences that they hear reflect the full complexity of the language.

<sup>42</sup>See Hurford (2002: 324) and Jackendoff (2007: Section 6.2) for problems that arise for certain kinds of neural networks and Pulvermüller (2003, 2010) for an alternative architecture that does not have these problems.

such experiments. Their study has been criticized by Ambridge, Rowland & Pine (2008), since these authors were able to show that children really do make mistakes when fronting auxiliaries. They attribute the difference from the results of Crain and Nakayama's earlier study to the unfortunate choice of auxiliary in that study: due to the use of the auxiliary *is*, the ungrammatical examples contained pairs of words that never or only very rarely occur next to each other (*who running* in (62a)).

	- b. The boy who can run fast can jump high. → \* Can the boy who run fast can jump high?

If one uses the auxiliary *can*, this problem disappears since *who* and *run* certainly do appear together. This then leads to children actually making mistakes that they should not make, as the incorrect utterances violate a constraint that is supposed to be part of innate linguistic knowledge.

Estigarribia (2009) investigated English polar questions in particular. He shows that not even half of the polar questions in children's input have the form Aux NP VP (p. 74). Instead, parents communicated with their children in a simplified form and used sentences such as:

	- b. He talking?
	- c. That taste pretty good?

Estigarribia divides the various patterns into complexity classes of the following kind: frag (*fragmentary*), spred (*subject predicate*) and aux-in (*auxiliary inversion*). (64) shows corresponding examples:


What we see is that the complexity increases from class to class. Estigarribia suggests a system of language acquisition where simpler classes are acquired before more complex ones and the latter develop from peripheral modifications of the simpler classes (p. 76). He assumes that question forms are learned from right to left (*right to left elaboration*), that is, (64a) is learned first, then the pattern in (64b) containing a subject in addition to the material in (64a), and then, in a third step, the pattern (64c) in which an additional auxiliary occurs (p. 82). In this kind of learning procedure, no auxiliary inversion is involved. This view is compatible with constraint-based analyses such as that of Ginzburg & Sag (2000). A similar approach to acquisition by Freudenthal, Pine, Aguado-Orea & Gobet (2007) will be discussed in Section 16.3.

A further interesting study has been carried out by Bod (2009b). He shows that it is possible to learn auxiliary inversion assuming trees with any kind of branching even if there is no auxiliary inversion with complex noun phrases present in the input. The procedure he uses as well as the results he obtains are very interesting and will be discussed in more detail in Section 13.8.3.

In conclusion, we can say that children do make mistakes with regard to the position of auxiliaries that they probably should not make if the relevant knowledge were innate. Information about the statistical distribution of words in the input is enough to learn the structures of complex sentences without such complex sentences actually being present in the input.

### **13.8.2.5 Summary**

Pullum & Scholz (2002: 19) show what an Argument from Poverty of the Stimulus (APS) would have to look like if it were constructed correctly:

	- a. ACQUIRENDUM CHARACTERIZATION: describe in detail what is alleged to be known.
	- b. LACUNA SPECIFICATION: identify a set of sentences such that if the learner had access to them, the claim of data-driven learning of the acquirendum would be supported.
	- c. INDISPENSABILITY ARGUMENT: give reason to think that if learning were data-driven, then the acquirendum could not be learned without access to sentences in the lacuna.
	- d. INACCESSIBILITY EVIDENCE: support the claim that tokens of sentences in the lacuna were not available to the learner during the acquisition process.
	- e. ACQUISITION EVIDENCE: give reason to believe that the acquirendum does in fact become known to learners during childhood.

As the four case studies have shown, there can be reasons for rejecting the acquirendum. If the acquirendum does not have to be acquired, then there is no longer any evidence for innate linguistic knowledge. The acquirendum must at least be descriptively adequate. This is an empirical question that can be answered by linguists. In three of the four PoS arguments discussed by Pullum and Scholz, there were parts which were not descriptively adequate. In previous sections, we already encountered other PoS arguments that involve claims regarding linguistic data that cannot be upheld empirically (for example, the Subjacency Principle). For the remaining points in (65), interdisciplinary work is required: the specification of the lacuna falls within formal language theory (the specification of a set of utterances), the indispensability argument is a mathematical task from the realm of learning theory, the evidence for inaccessibility is an empirical question that can be approached by using corpora, and finally the evidence for acquisition is a question for experimental developmental psychologists (Pullum & Scholz 2002: 19–20).

Pullum & Scholz (2002: 46) point out an interesting paradox with regard to (65c): without results from mathematical theories of learning, one cannot achieve (65c). If one wishes to provide a valid Poverty of the Stimulus Argument, then this should automatically lead to improvements in theories of learning, that is, it is possible to learn more than was previously assumed.

## **13.8.3 Unsupervised Data-Oriented Parsing (U-DOP)**

Bod (2009b) has developed a procedure that does not require any information about word classes or relations between words contained in utterances. The only assumption that one has to make is that there is some kind of structure. The procedure consists of three steps:

1. Assign all possible (binary-branching) trees to the utterances of a corpus.
2. Divide these trees into all of their subtrees.
3. Compute the best tree for every utterance.

This process will be explained using the sentences in (66):

(66) a. Watch the dog.
	- b. The dog barks.

The trees that are assigned to these utterances only use the category symbol X since the categories for the relevant phrases are not (yet) known. In order to keep the example readable, the words themselves will not be given the category X, although one can of course do this. Figure 13.2 shows the trees for (66). In the next step, the trees are divided into subtrees. The trees in Figure 13.2 have the subtrees that can be seen in Figure 13.3.

Figure 13.2: Possible binary-branching structures for *Watch the dog* and *The dog barks*.

Figure 13.3: Subtrees for the trees in Figure 13.2

In the third step, we now have to compute the best tree for each utterance. For *The dog barks.*, there are two trees in the set of subtrees that correspond exactly to this utterance. But it is also possible to build structures out of smaller subtrees. There are therefore multiple derivations possible for *The dog barks.*, all of which use the trees in Figure 13.3: on the one hand, trivial derivations that use an entire tree, and on the other, derivations that build trees from smaller subtrees. Figure 13.4 gives an impression of how trees are constructed out of subtrees. If we now want to decide which of the analyses in (67) is the best, then we have to compute the probability of each tree.

(67) a. [[the dog] barks]
	- b. [the [dog barks]]

Figure 13.4: Analysis of *The dog barks* using subtrees from Figure 13.3

The probability of a tree is the sum of the probabilities of all of its analyses. There are two analyses for (67b), which can be found in Figure 13.4. The probability of the first analysis of (67b) corresponds to the probability of choosing exactly the complete tree for [the [dog barks]] from the set of all subtrees. Since there are twelve subtrees, the probability of choosing that one is 1/12. The probability of the second analysis is the product of the probabilities of the subtrees that are combined and is therefore 1/12 × 1/12 = 1/144. The probability of the tree in (67b) is therefore 1/12 + (1/12 × 1/12) = 13/144. One can calculate the probability of the tree in (67a) in the same way. The only difference here is that the tree for [the dog] occurs twice in the set of subtrees. Its probability is therefore 2/12. The probability of the tree [[the dog] barks] is therefore 1/12 + (1/12 × 2/12) = 14/144. We have thus extracted knowledge about plausible structures from the corpus. This knowledge can also be applied whenever one hears a new utterance for which there is no complete tree: it is then possible to use already known subtrees to calculate the probabilities of possible analyses of the new utterance. Bod's model can also be combined with weights: sentences that were heard longer ago by the speaker receive a lower weight. One can thereby also account for the fact that children do not have all sentences that they have ever heard available simultaneously. This extension makes the U-DOP model more plausible for language acquisition.
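The whole three-step computation can be reproduced in a few dozen lines of code. The following is a minimal sketch in Python of my own, not Bod's implementation; it assumes the simplifications made above (only internal nodes are labeled X, and a subtree's probability is its count divided by the size of the whole pool), and all function names are merely illustrative:

```python
# A toy reconstruction of the three U-DOP steps for the corpus in (66).
from collections import Counter
from fractions import Fraction
from itertools import product

def binary_trees(words):
    """All binary-branching trees over a word sequence; leaves are words."""
    if len(words) == 1:
        return [words[0]]
    return [(left, right)
            for i in range(1, len(words))
            for left in binary_trees(words[:i])
            for right in binary_trees(words[i:])]

def fragments(node):
    """Subtrees rooted at an internal node: every internal child is either
    cut off (frontier symbol 'X') or expanded by one of its own fragments."""
    options = [['X'] + fragments(child) if isinstance(child, tuple) else [child]
               for child in node]
    return [tuple(choice) for choice in product(*options)]

def all_subtrees(tree):
    """Fragments rooted at every internal node of a full tree."""
    if not isinstance(tree, tuple):
        return []
    return fragments(tree) + [s for child in tree for s in all_subtrees(child)]

# Steps 1 and 2: assign all binary trees and cut them into subtrees.
corpus = [['watch', 'the', 'dog'], ['the', 'dog', 'barks']]
pool = Counter(s for sentence in corpus
                 for tree in binary_trees(sentence)
                 for s in all_subtrees(tree))
total = sum(pool.values())  # 12, as in the text; pool[('the', 'dog')] == 2

def match(frag, tree):
    """Return the subtrees under the fragment's 'X' nodes if the fragment
    fits the top of `tree`, otherwise None."""
    if frag == 'X':
        return [tree] if isinstance(tree, tuple) else None
    if not isinstance(frag, tuple):
        return [] if frag == tree else None
    if not isinstance(tree, tuple) or len(frag) != len(tree):
        return None
    gaps = []
    for f, t in zip(frag, tree):
        sub = match(f, t)
        if sub is None:
            return None
        gaps += sub
    return gaps

def tree_prob(tree):
    """Step 3: sum over all derivations of `tree`; a derivation picks a
    matching fragment and recursively derives the trees at its 'X' nodes."""
    p = Fraction(0)
    for frag, count in pool.items():
        gaps = match(frag, tree)
        if gaps is not None:
            term = Fraction(count, total)
            for gap in gaps:
                term *= tree_prob(gap)
            p += term
    return p

print(tree_prob((('the', 'dog'), 'barks')))  # 7/72  = 14/144, i.e., (67a)
print(tree_prob(('the', ('dog', 'barks'))))  # 13/144, i.e., (67b)
```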

In the example above, we did not assign categories to the words. If we were to do this, then we would get the tree in Figure 13.5 on the following page as a possible subtree. These kinds of discontinuous subtrees are important if one wants to capture dependencies between elements that occur in different subtrees of a given tree. Some examples are the following sentences:

	- b. *What's* this scratch *doing* on the table?
	- c. Most software *companies* in Vietnam *are* small sized.

Figure 13.5: Discontinuous partial tree

It is then also possible to learn auxiliary inversion in English with these kinds of discontinuous trees. All one needs are tree structures for the two sentences in (69) in order to prefer the correct sentence (70a) over the incorrect one (70b).

(69) a. The man who is eating is hungry.
	- b. Is the boy hungry?

(70) a. Is the man who is eating hungry?
	- b. \* Is the man who eating is hungry?

U-DOP can learn the structures for (69) in Figure 13.6 on the next page from the sentences in (71):

	- b. The man is hungry.
	- c. The man mumbled.
	- d. The boy is eating.

Note that these sentences do not contain any instance of the structure in (70a). With the structures learned here, it is possible to show that the shortest possible derivation for the position of the auxiliary is also the correct one: the correct order *Is the man who is eating hungry?* only requires that the fragments in Figure 13.7 on the facing page are combined, whereas the structure for \* *Is the man who eating is hungry?* requires at least four subtrees from Figure 13.6 to be combined with each other. This is shown by Figure 13.8 on page 506.

The motivation for always taking the derivation that consists of the least subparts is that one maximizes similarity to already known material.
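This shortest-derivation criterion is easy to state with the machinery from the sketch above. The following function is again purely illustrative; it reuses the hypothetical `pool` and `match` from the previous snippet:

```python
# Fewest pool fragments needed to assemble `tree` (the shortest derivation).
# Returns None if `tree` cannot be derived from the pool at all.
def min_fragments(tree):
    best = None
    for frag in pool:
        gaps = match(frag, tree)
        if gaps is not None:
            counts = [min_fragments(gap) for gap in gaps]
            if None in counts:
                continue
            n = 1 + sum(counts)
            if best is None or n < best:
                best = n
    return best
```

Applied to a pool containing the structures of Figure 13.6, this yields 2 for the correct order in Figure 13.7 and at least 4 for the incorrect order.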

The tree for (72), which contains one auxiliary too many, can also be created from Figure 13.6 with just two subtrees (the tree [X is X] and the entire tree for *The man who is eating is hungry*).

(72) \* Is the man who is eating is hungry?

Figure 13.6: Structures that U-DOP learned from the examples in (69) and (71)

Figure 13.7: Derivation of the correct structure for combination with an auxiliary using two subtrees from Figure 13.6

Interestingly, children do produce these kinds of incorrect sentences (Crain & Nakayama 1987: 530; Ambridge, Rowland & Pine 2008). However, if we consider the probabilities of the subtrees in addition to the number of combined subparts, we get the correct result, namely (70a) and not (72). This is due to the fact that *the man who is eating* occurs in the corpus twice, in (70a) and in (71a). Thus, the probability of *the man who is eating* is just as high as the probability of *the man who is eating is hungry*, and thus the derivation in Figure 13.7 is preferred over the one for (72). This works for the constructed examples here; however, one can imagine that in a realistic corpus, sequences of the form *the man who is eating* are more frequent than sequences with further words, since *the man who is eating* can also occur in other contexts. Bod has applied this process

Figure 13.8: Derivation of the incorrect structure for the combination with an auxiliary using four subtrees from Figure 13.6

to corpora of adult language (English, German and Chinese) as well as to the Eve corpus from the CHILDES database in order to see whether analogy formation constitutes a plausible model for human acquisition of language. He was able to show that what we demonstrated for the sentences above also works for a larger corpus of naturally occurring language: although there were no examples of movement of an auxiliary across a complex NP in the Eve corpus, it is possible to learn by analogy that the auxiliary inside a complex NP cannot be fronted.

It is therefore possible to learn syntactic structures from a corpus without any prior knowledge about parts of speech or abstract properties of language. The only assumption that Bod makes is that there are (binary-branching) structures. The assumption of binarity is not really necessary, but if one includes flat branching structures in the computation, the set of trees becomes considerably bigger; therefore, Bod only used binary-branching structures in his experiments. In his trees, an X consists of two other X's or a word, so we are dealing with recursive structures. Bod's work thus proposes a theory of the acquisition of syntactic structures that only requires recursion, something that is viewed by Hauser, Chomsky & Fitch (2002) as a basic property of language.

As shown in Section 13.1.8, there is evidence that recursion is not restricted to language and thus one can conclude that it is not necessary to assume innate linguistic knowledge in order to be able to learn syntactic structures from the given input.

Nevertheless, it is important to point out something here: what Bod shows is that syntactic structures can be learned. The information about the part of speech of each word involved, which is not yet included in his structures, can also be derived using statistical methods (Redington et al. 1998, Clark 2000).<sup>43</sup> In all probability, the structures that can be learned correspond to structures that surface-oriented linguistic theories would also assume. However, not all aspects of linguistic analysis are acquired. In Bod's model,

<sup>43</sup>Computational linguistic algorithms for determining parts of speech often look at an entire corpus. But children are always dealing with just a particular part of it. The corresponding learning process must then also include a curve of forgetting. See Braine (1987: 67).

only occurrences of words in structures are evaluated. Nothing is said about whether words stand in a particular regular relationship to one another or not (for example, a lexical rule connecting a passive participle and a perfect participle). Furthermore, nothing is said about how the meaning of expressions arises (is it rather holistic in the sense of Construction Grammar or projected from the lexicon?). These are questions that still concern theoretical linguists (see Chapter 21) and cannot straightforwardly be derived from the statistical distribution of words and the structures computed from them (see Section 21.8.1 for more on this point).

A second comment is also needed: we have seen that statistical information can be used to derive the structure of complex linguistic expressions. This raises the question of how this relates to Chomsky's earlier argumentation against statistical approaches (Chomsky 1957: 16). Abney (1996: Section 4.2) discusses this in detail. The problem with the earlier argumentation is that Chomsky referred to Markov models. These are statistical versions of finite automata. Finite automata can only describe type-3 languages and are therefore not appropriate for analyzing natural language. However, Chomsky's criticism cannot be applied to statistical methods in general.

## **13.8.4 Negative evidence**

In a number of works that assume innate linguistic knowledge, it is claimed that children do not have access to negative evidence, that is, nobody tells them that sentences such as (49e) – repeated here as (73) – are ungrammatical (Brown & Hanlon 1970: 42–52; Marcus 1993).

(73) \* Is the dog that in the corner is hungry?

It is indeed correct that adults do not wake up their children with the ungrammatical sentence of the day; however, children do in fact have access to negative evidence of various sorts. For example, Chouinard & Clark (2003) have shown that English- and French-speaking parents correct the utterances of their children that are not well-formed. For example, they repeat utterances where the verb was inflected incorrectly. Children can deduce from the fact that the utterance was repeated, and from what was changed in the repetition, that they made a mistake, and Chouinard and Clark also showed that they actually do this. The authors looked at data from five children whose parents all had an academic qualification. They discuss the parent–child relationship in other cultures, too (see Ochs (1982), Ochs & Schieffelin (1985) and Marcus (1993: 71) for an overview) and refer to studies of American families with lower socio-economic status (page 660).

A further form of negative evidence is indirect negative evidence, which Chomsky (1981a: 9) also assumes could play a role in acquisition. Goldberg (1995: Section 5.2) gives the utterance in (74a) as an example:<sup>44</sup>

(74) a. The magician made the bird disappear.
	- b. \* The magician disappeared the bird.

<sup>44</sup>Also, see Tomasello (2006a: 277).

The child can conclude from the fact that adults use a more involved causative construction with *make* that the verb *disappear*, unlike other verbs such as *melt*, cannot be used transitively. An instructive example of the role played by indirect negative evidence comes from morphology. There are certain productive rules that nevertheless cannot be applied if an existing word blocks the application of the rule. An example is the -*er* nominalization suffix in German. By adding -*er* to a verb stem, one can derive a noun that refers to someone who carries out a particular action (often habitually) (*Raucher* 'smoker', *Maler* 'painter', *Sänger* 'singer', *Tänzer* 'dancer'). However, *Stehler* 'stealer' is very unusual. The formation of *Stehler* is blocked by the existence of *Dieb* 'thief'. Language learners therefore have to infer from the non-existence of *Stehler* that the nominalization rule does not apply to *stehlen* 'to steal'.

Similarly, a speaker with a grammar of English that does not have any restrictions on the position of manner adverbs would expect that both orders in (75) are possible (Scholz & Pullum 2002: 206):

(75) a. call the police immediately
	- b. \* call immediately the police

Learners can conclude indirectly from the fact that verb phrases such as (75b) (almost) never occur in the input that these are probably not part of the language. This can be modeled using the relevant statistical learning algorithms.

The examples for the existence of negative evidence provided so far are plausibility arguments. Stefanowitsch (2008) has combined corpus-linguistic studies of statistical distribution with acceptability experiments and has shown that negative evidence gained from expected frequencies correlates with the acceptability judgments of speakers. This process will now be discussed briefly: Stefanowitsch assumes the following principle:

(76) Form expectations about the frequency of co-occurrence of linguistic features or elements on the basis of their individual frequency of occurrence and check these expectations against the actual frequency of co-occurrence. (Stefanowitsch 2008: 518)

Stefanowitsch works with the part of the *International Corpus of English* that contains British English (ICE-GB). In this corpus, the verb *say* occurs 3,333 times and sentences with ditransitive verbs (Subj Verb Obj Obj) occur 1,824 times. The total number of verb occurrences in the corpus is 136,551. If all verbs occurred in all kinds of sentences with the same frequencies, then we would expect *say* to occur 44.52 times in the ditransitive construction (X / 1,824 = 3,333 / 136,551 and hence X = 1,824 × 3,333 / 136,551). But the number of actual occurrences is 0 since, unlike (77b), sentences such as (77a) are not used by speakers of English.

(77) a. \* Dad said Sue something nice.
	- b. Dad said something nice to Sue.
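The expected frequency used here is a simple proportion, which can be checked directly (an illustrative calculation with the ICE-GB counts cited above):

```python
# Expected number of ditransitive uses of `say` if it behaved like an average
# verb: solve X / 1,824 = 3,333 / 136,551 for X (counts from the ICE-GB).
say_occurrences = 3_333      # occurrences of `say`
ditransitives = 1_824        # Subj Verb Obj Obj sentences
verb_occurrences = 136_551   # all verb occurrences

expected = ditransitives * say_occurrences / verb_occurrences
print(f"{expected:.2f}")     # 44.52 expected, against 0 observed
```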

Stefanowitsch shows that the non-occurrence of *say* in the ditransitive sentence pattern is significant. Furthermore, he investigated how acceptability judgments compare to the frequent occurrence or non-occurrence of verbs in certain constructions. In a first experiment, he was able to show that the frequent non-occurrence of elements in particular constructions correlates with the acceptability judgments of speakers, whereas this is not the case for the frequent occurrence of a verb in a construction.

In sum, we can say that indirect negative evidence can be derived from linguistic input and that it seems to play an important role in language acquisition.

# **13.9 Summary**

It follows from all this that not a single one of the arguments in favor of innate linguistic knowledge remains uncontroversial. This of course does not rule out there still being innate linguistic knowledge but those who wish to incorporate this assumption into their theories have to take more care than was previously the case to prove that what they assume to be innate is actually part of our linguistic knowledge and that it cannot be learned from the linguistic input alone.

# **Comprehension questions**

1. What arguments are there for the assumption of innate linguistic knowledge?

# **Further reading**

Pinker's book (1994) is the best-written book arguing for nativist models of language.

Elman, Bates, Johnson, Karmiloff-Smith, Parisi & Plunkett (1996) discuss all the arguments that have been proposed in favor of innate linguistic knowledge and show that the relevant phenomena can be explained differently. The authors adopt a connectionist view and work with neural networks, which are assumed to model what is happening in our brains relatively accurately. The book also contains chapters about the basics of genetics and the structure of the brain, going into detail about why a direct encoding of linguistic knowledge in our genome is implausible.

Certain approaches using neural networks have been criticized because they cannot capture certain aspects of human abilities such as recursion or the multiple use of the same word in an utterance. Pulvermüller (2010) discusses an architecture that has memory and uses it to analyze recursive structures. In his overview article, works are cited that show that the existence of more abstract rules or schemata of the kind theoretical linguists take for granted can be demonstrated on the neuronal level. Pulvermüller does not, however, assume that linguistic knowledge is innate (p. 173).

Pullum and Scholz have dealt with the Poverty-of-the-Stimulus argument in detail (Pullum & Scholz 2002, Scholz & Pullum 2002).

Goldberg (2006) and Tomasello (2003) are the most prominent proponents of Construction Grammar, a theory that explicitly tries to do without the assumption of innate linguistic knowledge.

# **14 Generative-enumerative vs. model-theoretic approaches**

Generative-enumerative approaches assume that a grammar generates a set of sequences of symbols (strings of words). This is where the term Generative Grammar comes from. Thus, it is possible to use the grammar on page 53, repeated here as (1), to derive the string *er das Buch dem Mann gibt* 'he the book the man gives'.


Beginning with the start symbol (S), symbols are replaced until one reaches a sequence of symbols only containing words. The set of all strings derived in this way is the language described by the grammar.

The following are classed as generative-enumerative approaches:


LFG was also originally designed to be a generative grammar.

The opposite of such theories of grammar are model-theoretic or constraint-based approaches (MTA). MTAs formulate well-formedness conditions on the expressions that the grammar describes. In Section 6.7, we already discussed a model-theoretic approach for theories that use feature structures to model phenomena. To illustrate this point, I will discuss another HPSG example: (2) shows the lexical item for *kennst* 'know'. In the description of (2), it is ensured that the phon value of the relevant linguistic sign is ⟨ *kennst* ⟩, that is, this value of phon is constrained. There are parallel restrictions for the features given in (2): the synsem value is given. In synsem, there are restrictions on the loc and nonloc value. In cat, there are individual restrictions for head and comps. The value of comps is a list with descriptions of dependent elements. The descriptions are given as abbreviations here, which actually stand for complex feature descriptions
that also consist of feature-value pairs. For the first argument of *kennst*, a head value of type *noun* is required, the per value in the semantic index has to be *second* and the num value has to be *sg*. The structure sharings in (2) are a special kind of constraint. Values that are not specified in the descriptions of lexical entries can vary in accordance with the feature geometry given by the type system. In (2), neither the slash value of the nominative NP nor the one of the accusative NP is fixed. This means that slash can either be an empty or non-empty list.

The constraints in lexical items such as (2) interact with further constraints that hold for the signs of type *phrase*. For instance, in head-argument structures, the non-head daughter must correspond to an element from the comps list of the head daughter.

Generative-enumerative and model-theoretic approaches view the same problem from different sides: the generative side only allows what can be generated by a given set of rules, whereas the model-theoretic approach allows everything that is not ruled out by constraints.<sup>1</sup>

Pullum & Scholz (2001: 19–20) and Pullum (2007) list the following model-theoretic approaches:<sup>2</sup>


<sup>1</sup>Compare this to an old joke: in dictatorships, everything that is not allowed is banned, in democracies, everything that is not banned is allowed and in France, everything that is banned is allowed. Generative-enumerative approaches correspond to the dictatorships, model-theoretic approaches are the democracies and France is something that has no correlate in linguistics.

<sup>2</sup> See Pullum (2007) for a historical overview of Model Theoretic Syntax (MTS) and for further references.


Categorial Grammars (Bouma & van Noord 1994), TAG (Rogers 1994) and Minimalist approaches (Veenstra 1998) can be formulated in model-theoretic terms.

Pullum & Scholz (2001) point out various differences between these points of view. In the following sections, I will focus on two of these differences.<sup>4</sup> Section 14.3 deals with ten Hacken's objection to the model-theoretic view.

# **14.1 Graded acceptability**

Generative-enumerative approaches differ from model-theoretic approaches in how they deal with the varying degrees of acceptability of utterances. In generative-enumerative approaches, a particular string is either included in the set of well-formed expressions or it is not. This means that it is not straightforwardly possible to say something about the degree of deviance: the first sentence in (3) is judged grammatical and the following three are equally ungrammatical.

(3) a. Du you kennst know.2sg diesen this.acc Aufsatz. essay 'You know this essay.'
	- b. \* Du you kennen know.3pl diesen this.acc Aufsatz. essay
	- c. \* Du you kennen know.3pl dieser this.nom Aufsatz. essay
	- d. \* Du you kennen know.3pl Aufsatz essay dieser. this.nom

At this point, critics of this view raise the objection that it is in fact possible to determine degrees of well-formedness in (3b–d): in (3b), there is no agreement between the subject and the verb; in (3c), *dieser Aufsatz* 'this essay' additionally has the wrong case; and in (3d), *Aufsatz* 'essay' and *dieser* 'this' occur in the wrong order as well. Furthermore, the sentence in (4) violates grammatical rules of German, but is nevertheless still interpretable.

<sup>3</sup>According to Pullum (2013: Section 3.2), there seems to be a problem for model-theoretic formalizations of so-called *constraining equations*.

<sup>4</sup> The reader should take note here: there are differing views with regard to how generative-enumerative and MTS models are best formalized and not all of the assumptions discussed here are compatible with every formalism. The following sections mirror the important points in the general discussion.


(4) Studenten students stürmen storm mit with Flugblättern flyers und and Megafon megaphone die the Mensa canteen und and rufen call alle all auf up zur to Vollversammlung plenary.meeting in in der the Glashalle glass.hall *zum* to.the *kommen*. come *Vielen* many.dat bleibt stays das the Essen food im in.the Mund mouth stecken stick und and *kommen* come *sofort* immediately *mit*. 5 with 'Students stormed into the university canteen with flyers and a megaphone calling for everyone to come to a plenary meeting in the glass hall. For many, the food stuck in their throats and they immediately joined them.'

Chomsky (1975: Chapter 5; 1964b) tried to use a string distance function to determine the relative acceptability of utterances. This function compares the string of an ungrammatical expression with that of a grammatical expression and assigns an ungrammaticality score of 1, 2 or 3 according to certain criteria. This treatment is not adequate, however, as there are much more fine-grained differences in acceptability and the string distance function also makes incorrect predictions. For examples of this and technical problems with calculating the function, see Pullum & Scholz (2001: 29).

In model-theoretic approaches, grammar is understood as a system of well-formedness conditions. An expression becomes worse, the more well-formedness conditions it violates (Pullum & Scholz 2001: 26–27). In (3b), the person and number requirements of the lexical item for the verb *kennst* are violated. In addition, the case requirements for the object have not been fulfilled in (3c). There is a further violation of a linearization rule for the noun phrase in (3d).

Well-formedness conditions can be weighted in such a way as to explain why certain violations lead to more severe deviations than others (Sorace & Keller 2005). Furthermore, performance factors also play a role when judging sentences (for more on the distinction between performance and competence, see Chapter 15). As we will see in Chapter 15, constraint-based approaches work very well as performance-compatible grammar models. If we combine the relevant grammatical theory with performance models, we will arrive at explanations for graded acceptability differences owing to performance factors.
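To make the idea of weighted constraints concrete, here is a toy scorer in this spirit; the constraint names and weights are invented for illustration and are not taken from Sorace & Keller (2005):

```python
# Toy cumulative scorer for constraint violations; the weights are made up
# and encode only the intuition that agreement violations weigh more than
# linearization violations.
WEIGHTS = {"subject-verb-agreement": 3.0, "object-case": 2.0, "np-order": 1.0}

def deviance(violations):
    """Higher scores correspond to less acceptable strings."""
    return sum(WEIGHTS[v] for v in violations)

print(deviance([]))                                                     # (3a): 0.0
print(deviance(["subject-verb-agreement"]))                             # (3b): 3.0
print(deviance(["subject-verb-agreement", "object-case"]))              # (3c): 5.0
print(deviance(["subject-verb-agreement", "object-case", "np-order"]))  # (3d): 6.0
```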

# **14.2 Utterance fragments**

Pullum & Scholz (2001: Section 3.2) point out that generative-enumerative theories do not assign structure to fragments. For instance, neither the string *and of the* nor the string *the of and* would receive a structure since none of these sequences is well-formed as an utterance and they are therefore not elements of the set of sequences generated by the grammar. However, *and of the* can occur as part of the coordination of PPs in sentences such as (5) and would therefore have some structure in these cases, for example the one given in Figure 14.1 on the facing page.

<sup>5</sup> Streikzeitung der Universität Bremen, 04.12.2003, p. 2. The emphasis is mine.

(5) That cat is afraid of the dog and of the parrot.

Figure 14.1: Structure of the fragment *and of the* following Pullum & Scholz (2001: 32)

As a result of the interaction of various constraints in a constraint-based grammar, it emerges that *the* is part of an NP, that this NP is an argument of *of*, and furthermore that *and* is combined with the relevant *of*-PP. In symmetric coordination, the first conjunct has the same syntactic properties as the second, which is why the partial structure of *and of the* allows one to draw conclusions about the category of the first conjunct despite its not being part of the string.

Ewan Klein noted that Categorial Grammar and Minimalist Grammars, which build up more complex expressions from simpler ones, can sometimes create such fragments (Pullum 2013: 507). This is certainly the case for Categorial Grammars with composition rules, which allow one to combine any sequence of words to form a constituent. If one views derivations as logical proofs, as is common in some variants of Categorial Grammar, then the actual derivation is irrelevant: what matters is whether a proof can be found or not. However, if one is interested in the derived structures, then the argument brought forward by Pullum and Scholz is still valid. For variants of Categorial Grammar that motivate the combination of constituents on the basis of their prosodic and information-structural properties (Steedman 1991: Section 3), the problem persists, since a fragment has a structure independent of the structure of the entire utterance and independent of its information-structural properties within this complete structure. This structure of the fragment can be such that it is not possible to analyze it with type-raising and composition rules.

In any case, this argument holds for Minimalist theories since it is not possible to have a combination of *the* with a nominal constituent if this constituent was not already built up from lexical material by Merge.

# **14.3 A problem for model-theoretic approaches?**

Ten Hacken (2007: 237–238) discusses the formal assumptions of HPSG. In HPSG, feature descriptions are used to describe feature structures. Feature structures must contain all the features belonging to a structure of a certain type. Additionally, the features have to have maximally specific values (see Section 6.7). Ten Hacken discusses the gender properties of the English noun *cousin*. In English, gender is important in order to ensure the correct binding of pronouns (see page 284 for German):

(6) a. The man sleeps. He snores.
	- b. The woman sleeps. He∗ snores.

While *he* in (6a) can refer to *man*, *woman* is not a possible antecedent. Ten Hacken's problem is that *cousin* is not marked with respect to gender. Thus, it is possible to use it to refer to both male and female relatives. As was explained in the discussion of the case value of *Frau* 'woman' in Section 6.7, it is possible for a value in a description to remain unspecified. Thus, in the relevant feature structures, any appropriate and maximally specific value is possible. The case of *Frau* can therefore be nominative, genitive, dative or accusative in an actual feature structure. Similarly, there are two possible genders for *cousin* corresponding to the usages in (7).

(7) a. I have a cousin. He is very smart.
	- b. I have a cousin. She is very smart.

Ten Hacken refers to examples such as (8) and claims that these are problematic:

	- b. How many cousins does Niels have?

In plural usage, it is not possible to assume that *cousins* is feminine or masculine since the set of relatives can contain both women and men. It is interesting to note that (9a) is possible in English, whereas German is forced to use (9b) to express the same meaning.

(9) a. Niels and Odette are cousins.
	- b. Niels Niels und and Odette Odette sind are Cousin cousin.m und and Cousine. cousin.f

Ten Hacken concludes that the gender value has to remain unspecified and this shows, in his opinion, that model-theoretic analyses are unsuited to describing language.

If we consider what exactly ten Hacken noticed, then it becomes apparent how one can account for this in a model-theoretic approach: ten Hacken claims that it does not make sense to specify a gender value for the plural form of *cousin*. In a model-theoretic approach, this can be captured in two ways: one can either assume that there are no gender features for referential indices in the plural, or one can add a gender value that plural nouns can have.

The first approach is supported by the fact that there are no inflectional differences between the plural forms of pronouns with regard to gender. There is therefore no reason to distinguish genders in the plural.

	- b. The cousins/brothers/sisters are standing over there. They are very smart.

No distinctions are found in the plural when it comes to nominal inflection (*brothers*, *sisters*, *books*). In German, this is different: there are differences both in nominal inflection and in the reference of (some) noun phrases with regard to the sexus of the referent. Examples of this are the previously mentioned *Cousin* 'male cousin' and *Cousine* 'female cousin' as well as forms with the suffix -*in* as in *Kindergärtnerin* 'female nursery teacher'. However, gender is normally a grammatical notion that has nothing to do with sexus. An example is the neuter noun *Mitglied* 'member', which can refer to both female and male persons.

The question that one has to ask when discussing ten Hacken's problem is the following: does gender play a role for pronominal binding in German? If this is not the case, then the gender feature is only relevant within the morphology component, and there the gender value is determined for each noun in the lexicon. For the binding of personal pronouns, there is no gender difference in German.

(11) Die Schwestern / Brüder / Vereinsmitglieder / Geschwister stehen dort. Sie lächeln.
the sisters.f / brothers.m / club.members.n / siblings stand there they smile
'The sisters/brothers/club members/siblings are standing there. They are smiling.'

Nevertheless, there are adverbials in German that agree in gender with the noun to which they refer (Höhle 1983: Chapter 6):

(12) b. Die Türen wurden eine nach der anderen geschlossen.
the doors.f were one.f after the other closed
'The doors were closed one after the other.'
c. Die Riegel wurden einer nach dem anderen zugeschoben.
the bolts.m were one.m after the other closed
'The bolts were closed one after the other.'

For animate nouns, it is possible to diverge from the gender of the noun in question and use a form of the adverbial that corresponds to the biological sex:

(13) a. Die Mitglieder des Politbüros wurden eines / einer nach dem anderen aus dem Saal getragen.
the members.n of.the politburo were one.n / one.m after the other out.of the hall carried
'The members of the politburo were carried out of the hall one after the other.'
b. Die Mitglieder des Frauentanzklubs verließen eines / eine nach dem / der anderen im Schutze der Dunkelheit den Keller.
the members.n of.the women's.dance.club left one.n / one.f after the.n / the.f other in.the protection of.the dark the basement
'The members of the women's dance club left the basement one after the other under cover of darkness.'

This deviation from gender in favor of sexus can also be seen with binding of personal and relative pronouns with nouns such as *Weib* 'woman' (pej.) and *Mädchen* 'girl':

(14) a. "Farbe color bringt brings die the meiste most Knete!" money verriet revealed ein a 14jähriges 14-year.old türkisches Turkish *Mädchen*, girl.n *die* who.f die the Mauerstückchen wall.pieces am in.the Nachmittag afternoon am at Checkpoint Checkpoint Charlie Charlie an at Japaner Japanese und and US-Bürger US-citizens verkauft.<sup>6</sup> sells

> ' "Color gets the most money" said a 14-year old Turkish girl who sells pieces of the wall to Japanese and American citizens at Checkpoint Charlie.'

b. Es it ist is ein a junges young *Mädchen*, girl.n *die* who.f auf on der the Suche search nach for CDs CDs bei at Bolzes Bolzes reinschaut.<sup>7</sup>

stops.by

'It is a young girl looking for CDs that stops by Bolzes.'

For examples from Goethe, Kafka and Thomas Mann, see Müller (1999b: 417–418).

For inanimate nouns such as those in (12), agreement is obligatory. For the analysis of German, one therefore does in fact require a gender feature in the plural. In English, this is not the case since there are no parallel examples with pronouns inflecting for gender. One can therefore either assume that plural indices do not have a gender feature or that the gender value is *none*. In the latter case, the feature would have a value and hence fulfill the formal requirements. (15) shows the first solution: plural indices are modeled by feature structures of type *pl-ind* and the gender feature is just not appropriate for such objects.


The second solution requires the type hierarchy in Figure 14.2 on the next page for the subtypes of *gender*. With such a type hierarchy *none* is a possible value of the gen feature and no problem will arise.

<sup>6</sup> taz, 14.06.1990, p. 6.

<sup>7</sup> taz, 13.03.1996, p. 11.

Figure 14.2: Type hierarchy for one of the solutions of ten Hacken's problem
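Both solutions can also be sketched in a few lines of code. The following Python fragment is a loose illustration rather than a rendering of the typed feature logic; apart from *pl-ind* and *none*, which come from the text, the type inventories are assumptions:

```python
# Solution 1: GEN is simply not appropriate for objects of type pl-ind,
# so a plural index is complete without any gender value.
APPROPRIATE = {"sg-ind": ["NUM", "GEN"], "pl-ind": ["NUM"]}  # assumed types

# Solution 2: the hierarchy of Figure 14.2, where none is a maximally
# specific subtype of gender alongside the "real" genders.
SUBTYPES = {"gender": ["masc", "fem", "none"]}  # assumed subtypes of gender

def maximally_specific(gender_type):
    """A type with no proper subtypes is maximally specific."""
    return gender_type not in SUBTYPES

# Under solution 2, a plural index satisfies the requirement that every
# appropriate feature have a maximally specific value:
plural_index = {"NUM": "pl", "GEN": "none"}
assert maximally_specific(plural_index["GEN"])
```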

In general, it is clear that cases such as the one constructed by ten Hacken will never be a problem: either there are values that make sense, or there are contexts for which no value makes sense, in which case the feature is simply not required there.

So, while ten Hacken's problem is a non-issue, there are certain problems of a more technical nature. I have pointed out one such technical problem in Müller (1999b: Section 14.4), where I show that spurious ambiguities arise for a particular analysis of verbal complexes in German when one resolves the values of a binary feature (flip). I also show how this problem can be avoided by a somewhat complicated stipulation of a value in certain contexts.

## **Further reading**

Pullum & Scholz (2001) is the main reference for a discussion of the model theoretic approach in comparison to generative-enumerative approaches.

# **15 The competence/performance distinction**

The distinction between competence and performance (Chomsky 1965: Section 1.1), which is assumed by several theories of grammar, was already discussed in Section 12.6.3 about the analysis of scrambling and verbal complexes in TAG. Theories of competence are intended to describe linguistic knowledge, while performance theories are assigned the task of explaining how linguistic knowledge is used as well as why mistakes are made in speech production and comprehension. A classic example in the competence/performance discussion is center self-embedding. Chomsky & Miller (1963: 286) discuss the following example with recursively embedded relative clauses:

(1) (the rat (the cat (the dog chased) killed) ate the malt)

(2b) is a corresponding example in German:

(2) a. dass der Hund bellt, der die Katze jagt, die die Maus kennt, die im Keller lebt
that the dog.m barks that.m the cat chases that.f the mouse knows who in.the basement lives
'that the dog that chases the cat that knows the mouse who is living in the basement is barking'
b. dass der Hund, [1 der die Katze, [2 die die Maus, [3 die im Keller lebt, 3] kennt, 2] jagt 1] bellt
that the dog that the cat that the mouse who in.the basement lives knows chases barks

The examples in (1) and (2b) are entirely incomprehensible for most people. If one rearranges the material somewhat, it is possible to process the sentences and assign a meaning to them.<sup>1</sup> For sentences such as (2b), it is often assumed that they fall within

<sup>1</sup> The sentence in (2a) can be continued following the pattern that was used to create the sentence. For instance by adding *die unter der Treppe lebte, die meine Freunde repariert haben* 'who lived under the staircase which my friends repaired'. This shows that a restriction of the number of elements that depend on one head to seven (Leiss 2003: 322) does not restrict the set of the sentences that are generated or licensed by a grammar to be finite. There are at most two dependents of each head in (2a). The extraposition of the relative clauses allows the hearer to group material into processable and reducible chunks, which reduces the cognitive burden during processing.

This means that the restriction to seven dependents does not cause a finitization of recursion ("Verendlichung von Rekursivität") as was claimed by Leiss (2003: 322). Leiss argued that Miller could not use his insights regarding short term memory, since he worked within Transformational Grammar rather than in Dependency Grammar. The discussion shows that dependency plays an important role, but that linear order is also important for processing.

our grammatical competence, that is, we possess the knowledge required to assign a structure to the sentence, although the processing of utterances such as (2b) exceeds the language-independent capacities of our brain. In order to successfully process (2b), we would have to retain the first five noun phrases and the corresponding hypotheses about the further progression of the sentence in our heads and could only begin to combine syntactic material once the verbs appear. Our brains become overwhelmed by this task. These problems do not arise when analyzing (2a), since it is possible to begin immediately to integrate the noun phrases into a larger unit.

Nevertheless, center self-embeddings of relative clauses can also be constructed in such a way that our brains can handle them. Hans Uszkoreit (p. c. 2009) gives the following example:

(3) Die Bänke, [1 auf denen damals die Alten des Dorfes, [2 die allen Kindern, [3 die vorbeikamen 3], freundliche Blicke zuwarfen 2], lange Stunden schweigend nebeneinander saßen 1], mussten im letzten Jahr einem Parkplatz weichen.
the benches on which back.then the old.people of.the village that all children that came.by friendly glances gave long hours silent next.to.each.other sat must in.the last year a car.park give.way.to
'The benches on which the older residents of the village, who used to give friendly glances to all the children who came by, used to sit silently next to one another for hours had to give way to a car park last year.'

Therefore, one would not want the description of our grammatical knowledge to state that relative clauses must not be embedded inside each other as in (2b), as this would also rule out (3).

We can easily accept the fact that our brains are not able to process structures past a certain degree of complexity and also that corresponding utterances then become unacceptable. The contrast in the following examples is far more fascinating:<sup>2</sup>

(4) a. The patient who the nurse who the clinic had hired admitted met Jack.
b. \* The patient who the nurse who the clinic had hired met Jack.

Although (4a) is syntactically well-formed and (4b) is not, Gibson & Thomas (1999) were able to show that (4b) is rated better by speakers than (4a). It does not occur to some people that an entire VP is missing. There are a number of explanations for this fact, all of which in some way make the claim that previously heard words are forgotten as soon as new words are heard and a particular degree of complexity is exceeded (Frazier 1985: 178; Gibson & Thomas 1999).

Instead of developing grammatical theories that treat (2b) and (4a) as unacceptable and (3) and (4b) as acceptable, descriptions have been developed that equally allow (2b),

<sup>2</sup> See Gibson & Thomas (1999: 227). Frazier (1985: 178) attributes the discovery of this kind of sentences to Janet Fodor.

(3), and (4a) (competence models) and then additionally investigate the way utterances are processed in order to find out what kinds of structures our brains can handle and which ones they cannot. The result of this research is then a performance model (see Gibson (1998), for example). This does not rule out that there are language-specific differences affecting language processing. For example, Vasishth, Suckow, Lewis & Kern (2010) have shown that the effects that arise in center self-embedding structures in German are different from those that arise in the corresponding English cases such as (4): due to the frequent occurrence of verb-final structures in German, speakers of German are better able to store predictions about the anticipated verbs in their working memory (p. 558).

Theories in the framework of Categorial Grammar, GB, LFG, GPSG and HPSG are theories about our linguistic competence.<sup>3</sup> If we want to develop a grammatical theory that directly reflects our cognitive abilities, then there should also be a corresponding performance model to go with a particular competence model. In the following two sections, I will recount some arguments from Sag & Wasow (2011) in favor of constraint-based theories such as GPSG, LFG and HPSG.

## **15.1 The derivational theory of complexity**

The first point discussed by Sag & Wasow (2011) is the Derivational Theory of Complexity. In the early days of Transformational Grammar, it was assumed that transformations were cognitively real, that is, that the resource consumption of transformations can be measured. A sentence whose analysis requires more transformations than that of another sentence should therefore also be more difficult for humans to process. The corresponding theory was dubbed the *Derivational Theory of Complexity* (DTC) and initial experiments seemed to confirm it (Miller & McKean 1964, Savin & Perchonock 1965, Clifton & Odom 1966), so that in 1968 Chomsky still assumed that the Derivational

<sup>3</sup> For an approach where the parser is equated with UG, see Abney & Cole (1986: Section 3.4). For a performance-oriented variant of Minimalism, see Phillips (2003).

In Construction Grammar, the question of whether a distinction between competence and performance is justified at all is a matter of controversy (see Section 10.6.4.9.1). Fanselow, Schlesewsky, Cavar & Kliegl (1999) also suggest a model – albeit for different reasons – where grammatical properties considerably affect processing properties. The aforementioned authors work in the framework of Optimality Theory and show that the OT constraints that they assume can explain parsing preferences. OT is not a grammatical theory on its own but rather a metatheory. It is assumed that there is a component GEN that creates a set of candidates. A further component EVAL then chooses the most optimal candidate from this set. GEN contains a generative grammar of the kind that we have seen in this book. Normally, a GB/MP variant or also LFG is assumed as the base grammar. If one assumes a transformational theory, then one automatically has a problem with the Derivational Theory of Complexity that we will encounter in the following section. If one wishes to develop OT parsing models, then one has to make reference to representational variants of GB, as the aforementioned authors seem to do.

Theory of Complexity was in fact correct (Chomsky 1976a: 249–250).<sup>4</sup> Some years later, however, most psycholinguists rejected the DTC. For discussion of several experiments that testify against the DTC, see Fodor, Bever & Garrett (1974: 320–328). One set of phenomena for which the DTC makes incorrect predictions under the respective analyses is elliptical constructions (Fodor, Bever & Garrett 1974: 324): in elliptical constructions, particular parts of the utterance are left out or replaced by auxiliaries. In transformation-based approaches, it was assumed that (5b) is derived from (5a) by means of deletion of *swims* and that (5c) is derived from (5b) by inserting *do*.<sup>5</sup>

(5) a. John swims faster than Bob swims.
b. John swims faster than Bob.
c. John swims faster than Bob does.

The DTC predicts that (5b) should require more time to process than (5a), since the analysis of (5b) first requires building up the structure in (5a) and then deleting *swims*. This prediction was not confirmed.

Similarly, no difference could be identified for the pairs in (6) and (7) even though one of the sentences, given the relevant theoretical assumptions, requires more transformations for the derivation from a base structure (Fodor, Bever & Garrett 1974: 324).

(6) a. John phoned up the girl.
b. John phoned the girl up.

(7) a. The bus driver was nervous after the wreck.
b. The bus driver was fired after the wreck.

In (6), we are dealing with local reordering of the particle and the object. (7b) contains a passive clause that should be derived from an active clause under Transformational Grammar assumptions. If we compare this sentence with an equally long sentence with

<sup>4</sup> In the Transformational Grammar literature, transformations were later viewed as a metaphor (Lohnstein 2014: 170, also in Chomsky 2001: Footnote 4), that is, it was no longer assumed to have psycholinguistic reality. In *Derivation by phase* and *On phases*, Chomsky refers once again to processing aspects such as computational and memory load (Chomsky 2001: 11, 12, 15; 2007: 3, 12; 2008: 138, 145, 146, 155). See also Marantz (2005: 440) and Richards (2015). Trinh (2011: 17; 2019: 9) cites Chomsky (p.c.) with the following quote: "As speaking involves cognitive effort, Pronunciation Economy might be derived from the general principle of minimizing computation."

A structure building operation that begins with words and is followed by transformations/internal merge and further combinations, as recently assumed by theories in the Minimalist Program, is psycholinguistically implausible for sentence parsing. See Labelle (2007) and Section 15.2 for more on incremental processing.

Chomsky (2007: 6) (written later than *On phases*) seems to adopt a constraint-based view. He writes that "a Merge-based system involves parallel operations" and compares the analysis of an utterance with a proof and explicitly mentions the competence/performance distinction.

<sup>5</sup> Similar analyses are assumed today in the Minimalist Program. For example, Trinh (2011: 63) assumes that VP ellipsis is deletion at Phonological Form (PF). This means that a complete structure is built which is then not pronounced. Since he talks about cognitive efforts and computation with respect to the activity of speaking (p. 17), it follows that he regards the structures he is assuming as congnitively real.

an adjective, like (7a), the passive clause should be more difficult to process. This is, however, not the case.

It is necessary to add two qualifications to Sag & Wasow's claims: if one has experimental data that show that the DTC makes incorrect predictions for a particular analysis, this does not necessarily mean that the DTC has been disproved. One could also try to find a different analysis for the phenomenon in question. For example, instead of a transformation that deletes material, one could assume empty elements for the analysis of elliptical structures that are inserted directly into the structure without deleting any material (see page 68 for the assumption of an empty nominal head in structures with noun ellipsis in German). Data such as (5) would then be irrelevant to the discussion.<sup>6</sup> However, reordering such as (6b) and the passive in (7b) are the kinds of phenomena that are typically explained using transformations.

The second qualification pertains to analyses for which there is a representational variant: it is often said that transformations are simply metaphors (Jackendoff 2000: 22–23; 2007: 5, 20): for example, we have seen that extractions with a transformational grammar yield structures that are similar to those assumed in HPSG. Figure 15.1 shows cyclic movement in GB theory compared to the corresponding HPSG analysis.

Figure 15.1: Cyclic movement vs. feature percolation

In GB, an element is moved to the specifier position of CP (SpecCP) and can then be moved from there to the next higher SpecCP position.

(8) a. Chris, we think [CP \_ Anna claims [CP \_ that David saw \_ ]]. (GB)
b. Chris, we think [CP/NP Anna claims [CP/NP that David saw \_ ]]. (HPSG)

In HPSG, the same effect is achieved by structure sharing. Information about a long-distance dependency is not located in the specifier node but rather in the mother node

<sup>6</sup>Culicover & Jackendoff (2005: Chapters 1 and 7) argue in favor of analyzing ellipsis as a semantic or pragmatic phenomenon rather than a syntactic one anyway.

of the projection itself. In Section 19.2, I will discuss various ways of eliminating empty elements from grammars. If we apply these techniques to structures such as the GB structure in Figure 15.1, then we arrive at structures where information about missing elements is integrated into the mother node (CP) and the position in SpecCP is unfilled. This roughly corresponds to the HPSG structure in Figure 15.1.<sup>7</sup> It follows from this that there are classes of phenomena that can be spoken about in terms of transformations without expecting empirical differences with regard to performance when compared to transformation-less approaches. However, it is important to note that we are dealing with an S-structure in the left-hand tree in Figure 15.1. As soon as one assumes that this is derived by moving constituents out of other structures, this equivalence of approaches disappears.

## **15.2 Incremental processing**

The next important point mentioned by Sag & Wasow (2011) is the fact that both comprehension and production of language take place incrementally. As soon as we hear or read even the beginning of a word, we begin to assign meaning and to create structure. In the same way, we sometimes start talking before we have finished planning the entire utterance. This is shown by interruptions and self-corrections in spontaneous speech (Clark & Wasow 1998, Clark & Fox Tree 2002). When it comes to processing spoken language, Tanenhaus et al. (1996) have shown that we access a word as soon as we have heard a part of it (see also Marslen-Wilson 1975). The authors of the study carried out an experiment where participants were instructed to pick up particular objects on a grid and reorganize them. Using eye-tracking measurements, Tanenhaus and colleagues could then show that the participants could identify the object in question earlier if the sound sequence at the beginning of the word was unambiguous than in cases where the initial sounds occurred in multiple words. An example of this is a configuration with a candle and candy: *candy* and *candle* both begin with *can*, so that hearers could not yet decide upon hearing this sequence which lexical entry should be accessed. Therefore, there was a slight delay in accessing the lexical entry when compared to words where the objects in question did not share the initial segments of the word (Tanenhaus et al. 1995: 1633).

If complex noun phrases were used in the instructions (*Touch the starred yellow square*), the participants' gaze fell on the object in question 250 ms after it was unambiguously identifiable. This means that if there was only a single object with stars on it, then they looked at it after they heard *starred*. In cases where there were starred yellow blocks as well as squares, they looked at the square only after they had processed the word *square* (Tanenhaus et al. 1995: 1632). The planning and execution of a gaze lasts 200 ms. From this, one can conclude that hearers combine words directly and, as soon as enough information is available, they create sufficient structure in order to capture the (potential) meaning of an expression and react accordingly. This finding is incompatible with models that assume that one must have heard a complete noun phrase, or an even larger complete

<sup>7</sup> In Figure 15.1, additionally the unary branching of C′ to CP was omitted in the tree on the right so that C combines directly with VP/NP to form CP/NP.

utterance, before it is possible to conclude anything about the meaning of a phrase/utterance. In particular, analyses in the Minimalist Program which assume that only entire phrases or so-called phases<sup>8</sup> are interpreted (see Chomsky 1999 and also Marantz 2005: 441, who explicitly contrasts the MP to Categorial Grammar) must therefore be rejected as inadequate from a psycholinguistic perspective.<sup>9,10</sup>

With contrastive emphasis of individual adjectives in complex noun phrases (e.g., *the BIG blue triangle*), hearers assumed that there must be a corresponding counterpart to the reference object, e.g., a small blue triangle. The eye-tracking studies carried out by Tanenhaus et al. (1996) have shown that taking this kind of information into account results in objects being identified more quickly.

Similarly, Arnold et al. (2004) have shown, also using eye-tracking studies, that hearers tend to direct their gaze to previously unmentioned objects if the interlocutor interrupts their speech with *um* or *uh*. This can be explained by the fact that hearers assume that describing previously unmentioned objects is more complex than referring to objects already under discussion; by using *um* or *uh*, the speaker can buy more time for this.

Examples such as those above constitute evidence for approaches that assume that when processing language, information from all available channels is used and that this information is used as soon as it is available, not only after the structure of the entire utterance or a complete phrase has been constructed. The results of experimental research therefore show that the hypothesis of a strictly modular organization of linguistic knowledge must be rejected. Proponents of this hypothesis assume that the output of one module constitutes the input of another without a given module having access to the inner states of another module or the processes taking place inside it. For example, the morphology module could provide the input for syntax, which would then be processed later by the semantic module. One kind of evidence that is often cited for this kind of organization of linguistic knowledge is so-called *garden path sentences* such as (9):

(9) a. The horse raced past the barn fell.
b. The boat floated down the river sank.

The vast majority of English speakers struggle to process these sentences since their parser is led down a garden path: it builds up a complete structure for (10a) or (10b), only then to realize that there is another verb that cannot be integrated into this structure.

<sup>8</sup>Usually, only CP and vP are assumed to be phases.

<sup>9</sup> Sternefeld (2006: 729–730) points out that in theories in the Minimalist Program, the common assumption of uninterpretable features is entirely unjustified. Chomsky assumes that there are features that have to be deleted in the course of a derivation since they are only relevant for syntax. If they are not checked, the derivation crashes at the interface to semantics. It follows from this that NPs should not be interpretable under the assumptions of these theories since they contain a number of features that are irrelevant for the semantics and have to therefore be deleted (see Section 4.1.2 of this book and Richards 2015). As we have seen, these kinds of theories are incompatible with the facts.

<sup>10</sup>It is sometimes claimed that current Minimalist theories are better suited to explain production (generation) than perception (parsing). But these models are as implausible for generation as they are for parsing. The reason is that it is assumed that there is a syntax component that generates structures that are then shipped to the interfaces. This is not what happens in generation though. Usually speakers know what they want to say (at least partly), that is, they start with semantics.

(10) a. The horse raced past the barn.
b. The boat floated down the river.

However, the actual structure of (9) contains a reduced relative clause (*raced past the barn* or *floated down the river*). That is, the sentences in (9) are semantically equivalent to the sentences in (11):

(11) a. The horse that was raced past the barn fell.
b. The boat that was floated down the river sank.

The failure of the parser in these cases was explained by assuming that syntactic processing, such as constructing a sentence from NP and VP, takes place independently of the processing of other constraints. However, as Crain & Steedman (1985) and others have shown, there are data that make this explanation seem implausible: if (9a) is uttered in a relevant context, the parser is not misled. In (12), there are multiple horses under discussion and each NP is clearly identified by a relative clause. The hearer is therefore prepared for a relative clause and can process the reduced relative clause without being led down the garden path, so to speak.

(12) The horse that they raced around the track held up fine. The horse that was raced down the road faltered a bit. And the horse raced past the barn fell.

By exchanging lexical material, it is also possible to modify (9a) in such a way as to ensure that processing is unproblematic without having to add additional context. It is necessary to choose the material so that the interpretation of the noun as the subject of the verb in the reduced relative clause is ruled out. Accordingly, *evidence* in (13) refers to something inanimate. It is therefore not a possible agent of *examined*. A hypothesis with *evidence* as the agent of *examined* is therefore never entertained when processing this sentence (Sag & Wasow 2011).

(13) The evidence examined by the judge turned out to be unreliable.

Since processing proceeds incrementally, it is sometimes assumed that realistic grammars should be obliged to immediately assign a constituent structure to previously heard material (Ades & Steedman 1982, Steedman 1989b, Hausser 1992). Proponents of this view would assume a structure for the following sentence where every word forms a constituent with the preceding material:

(14) [[[[[[[[[[[[[[Das britische] Finanzministerium] stellt] dem] angeschlagenen] Bankensystem] des] Landes] mindestens] 200] Milliarden] Pfund] zur] Verfügung].
the British treasury provides the crippled banking.system of.the country at.least 200 billion pounds to use
'The British Treasury is making at least 200 billion pounds available to the crippled banking system of the country.'

Pulman (1985), Stabler (1991) and Shieber & Johnson (1993: 301–308) have shown, however, that it is possible to build semantic structures incrementally, using the kind of

phrase structure grammars we encountered in Chapter 2. This means that a partial semantic representation for the string *das britische* 'the British' can be computed without having to assume that the two words form a constituent in (14). Therefore, one does not necessarily need a grammar that directly licenses the immediate combination of words. Furthermore, Shieber & Johnson (1993) point out that from a purely technical point of view, synchronous processing is more costly than asynchronous processing, since synchronous processing requires additional mechanisms for synchronization, whereas asynchronous processing processes information as soon as it becomes available (p. 297–298). Shieber and Johnson do not clarify whether this also applies to the synchronous/asynchronous processing of syntactic and semantic information. See Shieber & Johnson (1993) for incremental processing and for a comparison of Steedman's Categorial Grammar and TAG.
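This point can be made concrete with a toy sketch. In the following Python fragment, the 'semantic representations' are simplistic invented strings; what matters is only that the determiner and adjective meanings combine by function composition before the noun arrives, without *das britische* ever forming a constituent:

```python
# Invented, simplistic 'semantic representations' as strings; the point is
# only that composition can proceed word by word, with no [das britische]
# constituent ever being built.

das = lambda restr: f"the unique x such that {restr('x')}"      # determiner
britische = lambda noun: lambda x: f"british({x}) & {noun(x)}"  # adjective

# After hearing "das britische", compose the two meanings by function
# composition; the result is a partial representation awaiting the noun:
partial = lambda noun: das(britische(noun))

# When the noun finally arrives, the representation is completed:
finanzministerium = lambda x: f"treasury({x})"
print(partial(finanzministerium))
# prints: the unique x such that british(x) & treasury(x)
```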

What kind of conclusions can we draw from the data we have previously discussed? Are there further data that can help to determine the kinds of properties a theory of grammar should have in order to count as psycholinguistically plausible? Sag, Wasow & Bender (2003) and Sag & Wasow (2011, 2015) list the following properties that a performance-compatible competence grammar should have:<sup>11</sup>

- surface-oriented
- model-theoretic and hence constraint-based
- strongly lexicalist
Approaches such as CG, GPSG, LFG, HPSG, CxG and TAG are surface-oriented since they do not assume a base structure from which other structures are derived via transformations. Transformational approaches, however, require additional assumptions.<sup>12</sup> This will be briefly illustrated in what follows. In Section 3.1.5, we encountered the following analysis of English interrogatives:

(15) [CP What [C′ will [IP Ann [I′ \_ [VP read \_ ]]]]].

(i) Wallace saw Gromit in the kitchen.

<sup>11</sup>Also, see Jackendoff (2007) for reflections on a performance model for a constraint-based, surface-oriented linguistic theory.

<sup>12</sup>An exception among transformational approaches is Phillips (2003). Phillips assumes that structures relevant for phenomena such as ellipsis, coordination and fronting are built up incrementally. These constituents are then reordered in later steps by transformations. For example, in the analysis of (i), the string *Wallace saw Gromit in* forms a constituent where *in* is dominated by a node with the label P(P). This node is then turned into a PP in a subsequent step (p. 43–44).

While this approach is a transformation-based approach, the kind of transformation here is very idiosyncratic and incompatible with other variants of the theory. In particular, the modification of constituents contradicts the assumption of Structure Preservation when applying transformations as well as the *No Tampering Condition* of Chomsky (2008). Furthermore, the conditions under which an incomplete string such as *Wallace saw Gromit in* forms a constituent are not entirely clear.

This structure is derived from (16a) by two transformations (two applications of Move-α):

(16) a. Ann will read what.
b. \* Will Ann read what.

The first transformation creates the order in (16b) from (16a), and the second creates (15) from (16b).

When a hearer processes the sentence in (15), he begins to build structure as soon as he hears the first word. Transformations can, however, only be carried out when the entire utterance has been heard. One can, of course, assume that hearers process surface structures. However, since – as we have seen – they begin to access semantic knowledge early into an utterance, this raises the question of why we need a deep structure at all.

In analyses such as that of (15), deep structure is superfluous since the relevant information can be reconstructed from the traces. Corresponding variants of GB have been proposed in the literature (see page 123). They are compatible with the requirement of being surface-oriented. Chomsky (1981a: 181; 1986a: 49) and Lasnik & Saito (1992: 59–60) propose analyses where traces can be deleted. In these analyses, the deep structure cannot be directly reconstructed from the surface structure and one requires transformations in order to relate the two. If we assume that transformations are applied 'online' during the analysis of utterances, then this would mean that the hearer would have to keep a structure derived from previously heard material as well as a list of possible transformations in his working memory during processing. In constraint-based grammars, entertaining hypotheses about potential upcoming transformation steps is not necessary since there is only a single surface structure, which is processed directly. At present, it is still unclear whether it is actually possible to distinguish between these models empirically. But for Minimalist models with a large number of movements (see Figure 4.20 on page 149, for example), it should be clear that they are unrealistic, since storage space is required to manage the hypotheses regarding such movements and we know that short-term memory is very limited in humans.

Frazier & Clifton (1996: 27) assume that a transformation-based competence grammar yields a grammar with pre-compiled rules or rather templates that is then used for parsing. Therefore, theorems derived from UG are used for parsing and not axioms of UG directly. Johnson (1989) also suggests a parsing system that applies constraints from different sub-theories of GB as early as possible. This means that while he does assume the levels of representation D-structure, S-structure, LF and PF, he specifies the relevant constraints (X theory, Theta-Theory, Case Theory, …) as logical conditions that can be reorganized, then be evaluated in a different but logically equivalent order and be used for structure building.<sup>13</sup> Chomsky (2007: 6) also compares human parsing to working through a proof, where each step of the proof can be carried out in different orders. This view does not assume the psychological reality of levels of grammatical representation when processing language, but simply assumes that principles and structures play

<sup>13</sup>Stabler (1992: Section 15.7) also considers a constraint-based view, but arrives at the conclusion that parsing and other linguistic tasks should use the structural levels of the competence theory. This would again pose problems for the DTC.

a role when it comes to language acquisition. As we have seen, the question of whether we need UG to explain language acquisition has not been decided in favor of UG-based approaches. Instead, all available evidence seems to point in the opposite direction. However, even if innate linguistic knowledge does exist, the question arises as to why one would want to represent this knowledge as several structures linked via transformations when it is clear that these do not play a role for humans (especially language learners) when processing language. Approaches that can represent this knowledge using fewer technical means, e.g., without transformations, are therefore preferable. For more on this point, see Kuhn (2007: 615).

The requirement for constraint-based grammars is supported by incremental processing and also by the ability to deduce what will follow from previously heard material. Stabler (1991) has pointed out that Steedman's (1989b) argumentation with regard to incrementally processable grammars is incorrect, and he instead argues for maintaining a modular view of grammar. Stabler has developed a constraint-based grammar where syntactic and semantic knowledge can be accessed at any time. He formulates both syntactic structures and the semantic representations attached to them as conjoined constraints and then presents a processing system that processes structures based on the availability of parts of syntactic and semantic knowledge. Stabler rejects models of performance that assume that one must first apply all syntactic constraints before the semantic ones can be applied. If one abandons this strict view of modularity, then we arrive at something like (17):

(17) (Syn<sup>1</sup> ∧ Syn<sup>2</sup> ∧ … ∧ Syn ) ∧ (Sem<sup>1</sup> ∧ Sem<sup>2</sup> ∧ … ∧ Sem )

Syn<sub>1</sub>–Syn<sub>n</sub> stand for syntactic rules or constraints and Sem<sub>1</sub>–Sem<sub>n</sub> stand for semantic rules or constraints. If one so desires, the expressions in brackets can be referred to as modules. Since conjoined expressions can be reordered arbitrarily, one can imagine performance models that first apply some rules from the syntax module and then, when enough information is present, the respective rules from the semantic module. The order of processing could therefore be as in (18), for example:

(18) Syn<sup>2</sup> ∧ Sem<sup>1</sup> ∧ Syn<sup>1</sup> ∧ … ∧ Syn ∧ Sem<sup>2</sup> ∧ … ∧ Sem

If one subscribes to this view of modularity, then theories such as HPSG or CxG also have a modular structure. In the representation assumed in the HPSG variant of Pollard & Sag (1987) and Sign-Based CxG (see Section 10.6.2), the value of syn would correspond to the syntax module, the value of sem to the semantic module and the value of phon to the phonology module. If one were to remove the respective other parts of the lexical entries/dominance schemata, then one would be left with the part of the theory corresponding exactly to the level of representation in question.<sup>14</sup> Jackendoff (2000) argues

<sup>14</sup>In current theories in the Minimalist Program, an increasing amount of morphological, syntactic, semantic and information-structural information is being included in analyses (see Section 4.6.1). While there are suggestions for using feature-value pairs (Sauerland & Elbourne 2002: 290–291), a strict structuring of information as in GPSG, LFG, HPSG, CxG and variants of CG and TAG is not present. This means that there are the levels for syntax, Phonological Form and Logical Form, but the information relevant for these levels is an unstructured part of syntax, smeared all over syntactic trees.


for this form of modularity with the relevant interfaces between the modules for phonology, syntax, semantics and further modules from other areas of cognition. Exactly what there is to be gained from assuming these modules and how their existence could be demonstrated empirically remains somewhat unclear to me. For skepticism with regard to the very concept of modules, see Jackendoff (2000: 22, 27). For more on interfaces and modularization in theories such as LFG and HPSG, see Kuhn (2007).

Furthermore, Sag & Wasow (2015: 53–54) argue that listeners often leave semantic interpretation underspecified until enough information is present either in the utterance itself or the context. They do not commit to a certain reading early and run into garden paths or backtrack to other readings. This is modeled appropriately by theories that use a variant of underspecified semantics. For a concrete example of underspecification in semantics see Section 19.3.

In conclusion, we can say that surface-oriented, model-theoretic and strongly lexicalist grammatical theories such as CG, LFG, GPSG, HPSG, CxG and the corresponding GB/MP variants (paired with appropriate semantic representations) can plausibly be combined with processing models, while this is not the case for the overwhelming majority of GB/MP theories.

# **16 Language acquisition**

Linguists and philosophers are fascinated by the human ability to acquire language. Given the relevant input during childhood, language acquisition normally takes place completely effortlessly. Chomsky (1965: 24–25) put forward the requirement that a grammatical theory must provide a plausible model of language acquisition. Only then could it actually explain anything; otherwise, it would remain descriptive at best. In this chapter, we will discuss theories of acquisition from a number of theoretical standpoints.

## **16.1 Principles & Parameters**

A very influential explanation of language acquisition is Chomsky's Principles & Parameters model (1981a). Chomsky assumes that there is an innate Universal Grammar that contains knowledge that is equally relevant for all languages. Languages can then vary in particular ways. For every difference between languages in the area of core grammar, there is a parameter with a specific value. Normally, the value of a parameter is binary, that is, the value is either '+' or '−'. Depending on the setting of a parameter, a language will have certain properties, that is, setting a parameter determines whether a language belongs to a particular class of languages. Parameters are assumed to influence multiple properties of a grammar simultaneously (Chomsky 1981a: 6). For example, Rizzi (1986) claims that the pro-drop parameter affects the omissibility of referential subjects, the absence of expletives, subject extraction from clauses with complementizers (*that*-t contexts) and interrogatives, and finally the possibility of realizing the subject postverbally in VO languages (see Chomsky 1981a: Section 4.3; Meisel 1995: 12). It has been noted that there are counter-examples to all the correlations assumed.<sup>1</sup> Another example of a parameter is the Head Directionality Parameter discussed in Section 13.1.1. As was shown, there are languages where heads govern in different directions. In his overview article, Haider (2001) still mentions the parametrized Subjacency Principle but notes that subjacency is no longer assumed as a principle in newer versions of the theory (see Section 13.1.5.2 for more on subjacency).

Snyder (2001) discovered a correlation of various phenomena with productive root compounding, as manifested for instance in the compounding of two nouns. He argues

<sup>1</sup> See Haider (1994) and Haider (2001: Section 2.2) for an overview. Haider assumes that there is at least a correlation between the absence of expletive subjects and pro-drop. However, Galician is a pro-drop language with expletive subject pronouns (Raposo & Uriagereka 1990: Section 2.5). Franks (1995: 314) cites Upper and Lower Sorbian as pro-drop languages with expletive subjects. Scholz & Pullum (2002: 218) point out that there is an expletive pronoun *ci* in modern Italian although Italian is classed as a pro-drop language.


that the acquisition of complex predicate formation is connected to the acquisition of compound structures and that there is a parameter that is responsible for this type of compounding and simultaneously for the following set of phenomena:


Snyder examined languages from various language groups: Afroasiatic, Austroasiatic, Austronesian, Finno-Ugric, Indo-European (Germanic, Romance, Slavic), Japanese-Korean, Niger-Kordofanian (Bantu), and Sino-Tibetan, as well as American Sign Language and the language isolate Basque. The languages that were examined either had all of these phenomena or none of them. This was tested with native speakers of the respective languages. In addition, the claim that these phenomena are acquired once noun-noun compounds are used productively was tested for English using CHILDES data. The result was positive with the exception of the double object construction, for which an explanation was provided. The correlation of the phenomena in (1) is interesting and was interpreted as evidence for the existence of a parameter that correlates several phenomena in a language. However, Son (2007) and Son & Svenonius (2008) showed that Snyder's claims for Japanese were wrong and that there are further languages, such as Korean, Hebrew, Czech, Malayalam and Javanese, in which some of the phenomena do not correlate.

Gibson & Wexler (1994) discuss the acquisition of constituent order and assume three parameters that concern the position of the verb relative to the subject (SV vs. VS) and relative to the object (VO vs. OV) as well as the V2-property. There is no consensus in the literature about which parameters determine the make-up of languages (see Newmeyer 2005: Section 3.2 and Haspelmath 2008 for an overview and critical discussion). Fodor (1998a: 346–347) assumes that there are 20 to 30 parameters, Gibson & Wexler (1994: 408) mention the number 40, Baker (2003: 349) talks of 10 to 20 and Roberts & Holmberg (2005: 541) of 50 to 100. There is no consensus in the literature as to which parameters one should assume, how they interact and what they predict. However, it is nevertheless possible to contemplate how a grammar of an individual language could be derived from a UG with parameters that need to be set. Chomsky's original idea (1986b: Section 3.5.1) was that the child sets the value of a parameter based on the language input as soon as the relevant evidence is present from the input (see also Gibson & Wexler 1994, Nowak, Komarova & Niyogi 2001). At a given point in time, the learner has a grammar with certain parameter settings that correspond to the input seen so far. In order to fully acquire a grammar, all parameters must be assigned a value. In theory, thirty utterances should be enough to acquire a grammar with thirty parameters if these utterances provide unambiguous evidence for a particular parameter value.

This approach has often been criticized. If setting a parameter leads to a learner using a different grammar, one would expect sudden changes in linguistic behavior. This is, however, not the case (Bloom 1993: 731). Fodor (1998a: 343–344) also notes the following three problems: 1) Parameters can affect things that are not visible from the perceptible constituent order. 2) Many sentences are ambiguous with regard to the setting of a particular parameter, that is, there are sometimes multiple combinations of parameters compatible with one utterance. Therefore, the respective utterances cannot be used to set any parameters (Berwick & Niyogi 1996, Fodor 1998b). 3) There is a problem with the interaction of parameters. Normally multiple parameters play a role in an utterance such that it can be difficult to determine which parameter contributes what and thus how the values should be determined.

Points 1) and 2) can be illustrated using the constituent order parameters of Gibson & Wexler: imagine a child hears sentences such as the English and German examples in (2):

(2) a. Daddy drinks juice.
b. Papa trinkt Saft.
daddy drinks juice
'Daddy drinks juice.'

These sentences look exactly the same, even though radically different structures are assumed for each. According to the theories under discussion, the English sentence has the structure shown in Figure 3.9 on page 100, given in abbreviated form in (3a). The German sentence, on the other hand, has the structure in Figure 3.14 on page 108, corresponding to (3b):

(3) a. [IP Daddy [I′ \_ [VP drinks juice]]].
b. [CP Papa [C′ trinkt [IP \_ [I′ [VP Saft \_ ] \_ ]]]].

English has the basic constituent order SVO. The verb forms a constituent with the object (VP) and this is combined with the subject. The parameter settings must therefore be SV, VO and −V2. German, on the other hand, is analyzed as a verb-final and verb-second language and the parameter values would therefore have to be SV, OV and +V2. If we consider the sentences in (2), we see that the two sentences do not differ from one another with regard to the order of the verb and its arguments.

Fodor (1998a,b) concludes from this that one first has to build a structure in order to see what grammatical class the grammar licensing the structure belongs to: one first needs the structure in (3b) in order to be able to see that the verb occurs after its argument in the VP (Saft \_ ). The question is now how one arrives at this structure. A UG with 30 parameters corresponds to 2<sup>30</sup> = 1,073,741,824 fully instantiated grammars. It is an unrealistic assumption that children try out these grammars successively or simultaneously.

Gibson & Wexler (1994) discuss a number of solutions for this problem: parameters have a default value and the learner can only change a parameter value if a sentence that could previously not be analyzed can then be analyzed with the new parameter setting (*Greediness Constraint*). In this kind of procedure, only one parameter can be changed at a time (*Single Value Constraint*), which aims at ruling out great leaps leading to extremely different grammars (see Berwick & Niyogi 1996: 612–613, however). This reduces the processing demands, however with 40 parameters, the worst case could still be that one has to test 40 parameter values separately, that is, try to parse a sentence with


40 different grammars. This processing feat is still unrealistic, which is why Gibson & Wexler (1994: 442) additionally assume that only one hypothesis is tested per input sentence. A further modification of the model is the assumption that certain parameters only begin to play a role during the maturation of the child. At a given point in time, there could be only a few accessible parameters that need to be set; after these have been set, new parameters could become available.
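The procedure with both constraints can be made concrete with a small sketch. The following Python fragment is a loose illustration of a single update step of Gibson & Wexler's algorithm, not their actual implementation; the toy 'languages' and the parameter triples (SV, OV, V2) are invented stand-ins:

```python
import random

# Toy 'languages' for two of the eight possible grammars; the strings and
# the mapping are invented purely for illustration.
LANGUAGE = {
    (True, False, False): {"s v o"},          # SV, VO, -V2: "English-like"
    (True, True, True): {"s v o", "o v s"},   # SV, OV, +V2: "German-like"
}

def parses(sentence, grammar):
    """Stand-in for a real parser: look the sentence up in the toy language."""
    return sentence in LANGUAGE.get(grammar, set())

def tla_step(grammar, sentence):
    """One update step obeying the Single Value and Greediness Constraints."""
    if parses(sentence, grammar):
        return grammar                            # current grammar suffices
    i = random.randrange(len(grammar))            # Single Value Constraint:
    candidate = grammar[:i] + (not grammar[i],) + grammar[i + 1:]  # flip one
    # Greediness Constraint: adopt the change only if the sentence now parses.
    return candidate if parses(sentence, candidate) else grammar

g = (True, False, False)
for _ in range(100):
    g = tla_step(g, "o v s")
print(g)  # stays (True, False, False): no single flip helps (a 'local maximum', see below)
```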

In their article, Gibson & Wexler show that the interaction between input and parameter setting is in no way trivial. In their example scenario with three parameters, a situation can arise in which a learner sets a parameter in order to analyze a new sentence, however setting this parameter leads to the fact that the target grammar cannot be acquired because only one value can be changed at a time and changes can only be made if more sentences can be analyzed than before. The learner reaches a so-called local maximum in these problematic cases.<sup>2</sup> Gibson & Wexler then suggest assigning a default value to particular parameters, whereby the default value is the one that will cause the learner to avoid problematic situations. For the V2 parameter, they assume '−' as the default value.

Berwick & Niyogi (1996) show that Gibson & Wexler calculated the problematic conditions incorrectly and that, if one shares their assumptions, it is even more frequently possible to arrive at parameter combinations from which it is not possible to reach the target grammar by changing individual parameter values. They show that one of the problematic cases not addressed by Gibson & Wexler is −V2 (p. 609) and that the assumption of a default value for a parameter does not solve the problem as both '+' and '–' can lead to problematic combinations of parameters.<sup>3</sup> In their article, Berwick and Niyogi show that learners in the example scenario above (with three parameters) learn the target grammar faster if one abandons the Greediness or else the Single Value Constraint. They suggest a process that simply randomly changes one parameter if a sentence cannot be analyzed (*Random Step*, p. 615–616). The authors note that this approach does not share the problems with the local maxima that Gibson & Wexler had in their example and that it also reaches its goal faster than theirs. However, the fact that *Random Step* converges more quickly has to do with the quality of the parameter space (p. 618). Since there is no consensus about parameters in the literature, it is not possible to assess how the entire system works.

Yang (2004: 453) has criticized the classic Principles & Parameters model since abrupt switching between grammars after setting a parameter cannot be observed. Instead, he proposes the following learning mechanism:

(4) For an input sentence s, the child: (i) with probability P<sub>i</sub> selects a grammar G<sub>i</sub>, (ii) analyzes s with G<sub>i</sub>, (iii) if successful, rewards G<sub>i</sub> by increasing P<sub>i</sub>; otherwise punishes G<sub>i</sub> by decreasing P<sub>i</sub>.

<sup>2</sup> If one imagines the acquisition process as climbing a hill, then the Greediness Constraint ensures that one can only go uphill. It could be the case, however, that one begins to climb the wrong hill and can no longer get back down.

<sup>3</sup>Kohl (1999, 2000) has investigated this acquisition model in a case with twelve parameters. Of the 4096 possible grammars, 2336 (57%) are unlearnable if one assumes the best initial values for the parameters.

Yang discusses the example of the pro-drop and topic drop parameters. In pro-drop languages (e.g., Italian), it is possible to omit the subject, and in topic drop languages (e.g., Mandarin Chinese), it is possible to omit both the subject and the object if it is a topic. Yang compares English-speaking and Chinese-speaking children, noting that English-speaking children omit both subjects and objects in an early stage of linguistic development. He claims that the reason for this is that English-speaking children start off using the Chinese grammar.

The pro-drop parameter is one of the most widely discussed parameters in the context of Principles & Parameters theory and it will therefore be discussed in more detail here. It is assumed that speakers of English have to learn that all sentences in English require a subject, whereas speakers of Italian learn that subjects can be omitted. One can observe that children learning English as well as children learning Italian omit subjects (German children too, in fact). Subjects are also omitted notably more often than objects. There are two possible explanations for this: a competence-based one and a performance-based one. In competence-based approaches, it is assumed that children use a grammar that allows them to omit subjects and only later acquire the correct grammar (by setting parameters or increasing the rule apparatus). In performance-based approaches, by contrast, the omission of subjects is traced back to the fact that children are not yet capable of planning and producing long utterances due to their limited brain capacity. Since the cognitive demands are greatest at the beginning of an utterance, this leads to subjects being increasingly left out. Valian (1991) investigated these various hypotheses and showed that the frequency with which children learning English and children learning Italian omit subjects is not the same. Subjects are omitted more often than objects. She therefore concludes that competence-based explanations are not empirically adequate and that the omission of subjects should instead be viewed as a performance phenomenon (see also Bloom 1993). Another argument for the influence of performance factors is the fact that articles of subjects are left out more often than articles of objects (31% vs. 18%, see Gerken 1991: 440). As Bloom notes, no subject article-drop parameter has been proposed so far. If we explain this phenomenon as a performance phenomenon, then it is also plausible to assume that the omission of complete subjects is due to performance issues.

Gerken (1991) shows that the metrical properties of utterances also play a role: in experiments where children had to repeat sentences, they omitted the subject or the article of the subject more often than the object or the article of the object. Here, it made a difference whether the intonation pattern was iambic (weak-strong) or trochaic (strong-weak). It can even be observed with individual words that children leave out weak syllables at the beginning of a word more often than at its end. Thus, it is more probable that "giRAFFE" is reduced to "RAFFE" than "MONkey" to "MON". Gerken assumes the following for the metrical structure of utterances:



Subject pronouns in English are sentence-initial and form an iambic foot with the following strongly stressed verb, as in (5a). Object pronouns, however, can form the weak syllable of a trochaic foot, as in (5b).

(5) a. she KISSED + the DOG
b. the DOG + KISSED her
c. PETE + KISSED the + DOG

Furthermore, articles in iambic feet as in the object of (5a) and the subject of (5b) are omitted more often than in trochaic feet such as with the object of (5c).

It follows from this that there are multiple factors that influence the omission of elements and that one cannot simply take the behavior of children as evidence for switching between two grammars.

Apart from what has been discussed so far, the pro-drop parameter is of interest for another reason: there is a problem when it comes to setting parameters. The standard explanation is that learners identify that a subject must occur in all English sentences, which is suggested by the appearance of expletive pronouns in the input.

As discussed on page 533, there is no relation between the pro-drop property and the presence of expletives in a language. Since the pro-drop property does not correlate with any of the other putative properties either, only the existence of subject-less sentences in the input constitutes decisive evidence for setting a parameter. The problem is that there are grammatical utterances where there is no visible subject. Examples of this are imperatives such as (6), declaratives with a dropped subject as in (7a) and even declarative sentences without an expletive such as the example in (7b) found by Valian (1991: 32) in the New York Times.

(6) Show me your toy!

(7)	b. Seems like she always has something twin-related perking.

The following title of a Nirvana song also comes from the same year as Valian's article:

(8) Smells like Teen Spirit.

Teen Spirit refers to a deodorant and *smell* is a verb that, both in German and English, can be used with a referential subject as well as with an expletive *it* as subject. The usage that Kurt Cobain had in mind cannot be reconstructed;<sup>4</sup> independently of the intended meaning, however, the subject in (8) is missing. Imperatives do occur in the input children have and are therefore relevant for acquisition. Valian (1991: 33) says the following about them:

<sup>4</sup> See http://de.wikipedia.org/wiki/Smells\_Like\_Teen\_Spirit. 2018-02-20.

What is acceptable in the adult community forms part of the child's input, and is also part of what children must master. The utterances that I have termed "acceptable" are not grammatical in English (since English does not have pro subjects, and also cannot be characterized as a simple VP). They lack subjects and therefore violate the extended projection principle (Chomsky 1981a), which we are assuming.

Children are exposed to fully grammatical utterances without subjects, in the form of imperatives. They are also exposed to acceptable utterances which are not fully grammatical, such as [(7a)], as well as forms like, "Want lunch now?" The American child must grow into an adult who not only knows that overt subjects are grammatically required, but also knows when subjects can acceptably be omitted. The child must not only acquire the correct grammar, but also master the discourse conditions that allow relaxation of the grammar. (Valian 1991: 33)

This passage turns the relations on their head: from the fact that a particular grammatical theory is not compatible with certain data, we cannot conclude that these data should not be described by this theory; instead, we should modify the incompatible grammar or, if this is not possible, reject it. Since utterances with imperatives are entirely regular, there is no reason to categorize them as utterances that do not follow grammatical rules. The quotation above describes a situation where a learner has to acquire two grammars: one that corresponds to the innate grammar and a second one that partially suppresses the rules of the innate grammar and also adds some additional rules.

The question we can pose at this point is: how does a child distinguish which of the data it hears are relevant for which of the two grammars?

Fodor (1998a: 347) pursues a different analysis that does not suffer from many of the aforementioned problems. Rather than assuming that learners try to find the correct grammar among a billion others, she instead assumes that children work with a single grammar that contains all possibilities. She suggests using parts of trees (*treelets*) rather than parameters. These treelets can also be underspecified and, in extreme cases, a treelet can consist of a single feature (Fodor 1998b: 6). A language learner can deduce whether a language has a given property from the usage of a particular treelet. As an example, she provides a VP treelet consisting of a verb and a prepositional phrase. This treelet must be used for the analysis of the VP occurring in *Look at the frog*. Similarly, the analysis of an interrogative clause with a fronted *who* would make use of a treelet with a *wh*-NP in the specifier of a complementizer phrase (see Figure 3.7 on page 99). In Fodor's version of Principles and Parameters Theory, this treelet would be the parameter that licenses *wh*-movement in (overt) syntax. Fodor assumes that there are defaults that allow a learner to parse a sentence even when no or very few parameters have been set. This allows one to learn from utterances that one would not otherwise have been able to use, since there would have been multiple possible analyses for them. Assuming a default can lead to misanalyses, however: due to a default value, a second parameter could be set because an utterance was analyzed with the treelets t<sub>1</sub> and t<sub>3</sub>, for example, but t<sub>1</sub> was not suited to the particular language in question and the utterance should instead have been analyzed with the non-default treelet t<sub>2</sub> and the treelet t<sub>17</sub>. In this acquisition model, there must therefore be the possibility to correct wrong decisions in the parameter setting process.

Fodor therefore assumes that there is a frequency-based degree of activation for parameters (p. 365): treelets that are often used in analyses have a high degree of activation, whereas those used less often have a lower degree of activation. In this way, it is not necessary to assume a particular parameter value while excluding others.

Furthermore, Fodor proposes that parameters should be structured hierarchically, that is, only if a parameter has a particular value does it then make sense to think about specific other parameter values.
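The activation mechanism can be made concrete with a small simulation. The following is a minimal sketch, assuming invented names (`Treelet`, `parse_with_treelets`, `learn`) and a toy notion of parsing; it is not Fodor's implementation, but it shows how treelets used in successful analyses gain activation while unused ones decay, so that decisions based on defaults remain correctable:

```python
# A minimal sketch of Fodor-style treelet learning; all names are invented for
# illustration. Every treelet of the supergrammar is available from the start;
# treelets used in successful analyses gain activation, unused ones decay, so
# "parameter setting" is gradual and revisable rather than a one-shot switch.

from dataclasses import dataclass

@dataclass
class Treelet:
    name: str                 # e.g., "wh-in-SpecCP" or "V-final-VP"
    activation: float = 0.1   # small default: parsing is possible from the start

def parse_with_treelets(utterance_features, treelets):
    """Toy stand-in for parsing: return the treelets needed for the utterance."""
    return [t for t in treelets if t.name in utterance_features]

def learn(utterance_features, treelets, step=0.1, decay=0.02):
    used = {t.name for t in parse_with_treelets(utterance_features, treelets)}
    for t in treelets:
        if t.name in used:
            t.activation += step                           # reward used treelets
        else:
            t.activation = max(0.0, t.activation - decay)  # let others decay

grammar = [Treelet("wh-in-SpecCP"), Treelet("V-final-VP"), Treelet("V-initial-VP")]
for utterance in [{"wh-in-SpecCP"}, {"V-final-VP"}, {"V-final-VP"}]:
    learn(utterance, grammar)
for t in grammar:
    print(f"{t.name}: {t.activation:.2f}")
```

After this input, the V-final treelet is the most activated, but no decision is irrevocable: later input can still revive a treelet whose activation has decayed.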

Fodor's analysis is – as she herself notes (Fodor 2001: 385) – compatible with theories such as HPSG and TAG. Pollard & Sag (1987: 147) characterize UG as the conjunction of all universally applicable principles:

$$\text{(9)}\quad \text{UG} = \text{P}_1 \land \text{P}_2 \land \dots \land \text{P}_n$$

As well as principles that hold universally, there are other principles that are specific to a particular language or to a class of languages. Pollard & Sag give the example of a constituent ordering principle that holds only for English. English can then be characterized as in (10), where P<sub>n+1</sub>–P<sub>m</sub> are language-specific principles, L<sub>1</sub>–L<sub>o</sub> is a complete list of lexical entries and R<sub>1</sub>–R<sub>p</sub> a list of dominance schemata relevant for English.

$$\text{(10)}\quad \text{English} = \text{P}_1 \land \dots \land \text{P}_n \land \text{P}_{n+1} \land \dots \land \text{P}_m \land (\text{L}_1 \lor \dots \lor \text{L}_o \lor \text{R}_1 \lor \dots \lor \text{R}_p)$$

In Pollard & Sag's conception, only those properties of language that equally hold for all languages are part of UG. Pollard & Sag do not count the dominance schemata as part of this. However, one can indeed also describe UG as follows:

$$\text{(11)}\quad \text{UG} = \text{P}_1 \land \text{P}_2 \land \dots \land \text{P}_n \land (\text{R}_{\text{en-}1} \lor \dots \lor \text{R}_{\text{en-}q} \lor \text{R}_{\text{de-}1} \lor \dots \lor \text{R}_{\text{de-}r} \lor \dots)$$

P<sub>1</sub>–P<sub>n</sub> are, as before, universally applicable principles, R<sub>en-1</sub>–R<sub>en-q</sub> are the (core) dominance schemata of English and R<sub>de-1</sub>–R<sub>de-r</sub> are the dominance schemata of German. The dominance schemata in (11) are combined by means of disjunctions, that is, not every disjunct needs to have a realization in a specific language. Principles can make reference to particular properties of lexical entries and rule out certain phrasal configurations. If a language only contains heads that are marked for final position in the lexicon, then grammatical rules that require a head in initial position as their daughter can never be combined with these heads or their projections. Furthermore, theories with a type system are compatible with Fodor's approach to language acquisition because constraints can easily be underspecified. As such, constraints in UG do not have to make reference to all properties of grammatical rules: principles can refer to feature values, but the language-specific values themselves do not have to be contained in UG already. Similarly, a supertype describing multiple dominance schemata that have similar but language-specific instantiations can also be part of UG; the language-specific details remain open and are then deduced by the learner upon parsing (see Ackerman & Webelhuth 1998: Section 9.2). The differences in activation assumed by Fodor can be captured by weighting the constraints: the dominance schemata R<sub>en-1</sub>–R<sub>en-q</sub> etc. are sets of feature-value pairs as well as path equations. As explained in Chapter 15, weights can be added to such constraints and also to sets of constraints. In Fodor's acquisition model, given a German input, the weights for the rules of English would be reduced and those for the German rules would be increased. Note that in Pollard & Sag's acquisition scenario, there are no triggers for parameter setting, unlike in Fodor's model. Furthermore, properties that were previously disjunctively specified as part of UG will now be acquired directly. Using the treelet t<sub>17</sub> (or rather a possibly underspecified dominance schema), it is not the case that the value '+' is set for a parameter P<sub>5</sub>; rather, the activation potential of t<sub>17</sub> is increased such that t<sub>17</sub> will be prioritized in future analyses.
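The relation between (10), (11) and the weighting idea can be made explicit in a toy model. The sketch below uses invented predicate names and dictionary "structures"; it merely illustrates that a structure is licensed if all principles hold (conjunction) and at least one schema applies (disjunction), and that input shifts weights rather than editing UG itself:

```python
# A toy rendering of (10)/(11): a structure is licensed iff every principle
# holds (conjunction) and at least one dominance schema applies (disjunction).
# Weights on the schemata stand in for Fodor's activation values.

def licensed(structure, principles, schemata):
    return (all(p(structure) for p in principles)
            and any(r(structure) for r in schemata))

def head_feature_principle(s):
    return s["mother_head"] == s["head"]          # head features project

def r_en(s):
    return s["head_position"] == "initial"        # English-style schema

def r_de(s):
    return s["head_position"] == "final"          # German-style schema

weights = {"r_en": 0.5, "r_de": 0.5}              # UG: both schemata available

structure = {"head": "V", "mother_head": "V", "head_position": "final"}
if licensed(structure, [head_feature_principle], [r_en, r_de]):
    weights["r_de"] += 0.1                        # reward the schema that was used
print(weights)                                    # {'r_en': 0.5, 'r_de': 0.6}
```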

# **16.2 Principles and the lexicon**

A variant of the UG-driven theory of language acquisition would be to assume that principles are so general that they hold for all languages and individual languages simply differ with regard to their lexicon. Principles then refer to properties of combined entities. Parameters therefore migrate from principles into the lexicon (Chomsky 1999: 2). See Mensching & Remberger (2011) for a study of Romance languages in this model and Son & Svenonius (2008: 395) for an analysis of Snyder's examples that were discussed in the previous section.

At this point, one can observe an interesting convergence in these approaches: most of the theories discussed here assume a very general structure for the combination of heads with their arguments. For example, in Categorial Grammar and the Minimalist Program, these are always binary functor-argument combinations. The way in which constituents can be ordered in a particular language depends on the lexical properties of the combined elements.

The question that is being discussed controversially at present is whether the spectrum of lexical properties is determined by UG (Chomsky 2007: 6–7) and whether all areas of the language can be described with the same general combinatorial possibilities (see Section 21.10 on phrasal constructions).

In Section 16.1, I have shown what theories of acquisition assuming innate language-specific knowledge can look like and also that variants of such acquisition theories are compatible with all the theories of grammar we have discussed. During this discussion, one should bear in mind the question of whether it makes sense at all to assume that English children use parts of a Chinese grammar during some stages of their acquisition process (as suggested by Yang 2004: 453), or whether the relevant phenomena can be explained in different ways. In the following, I will present some alternative approaches that do not presuppose innate language-specific knowledge, but instead assume that language can simply be acquired from the input. The following section will deal with pattern-based approaches and Section 16.4 will discuss the lexically oriented variant of input-based language acquisition.

# **16.3 Pattern-based approaches**

Chomsky (1981a: 7–8) proposed that languages can be divided into a core area and a periphery. The core contains all regular aspects of language. The core grammar of a language is seen as an instantiation of UG. Idioms and other irregular parts of language are then part of the periphery. Critics of the Principles & Parameters model have pointed out that idiomatic and irregular constructions constitute a relatively large part of our language and that the distinction, both fluid and somewhat arbitrary, is only motivated theory-internally (Jackendoff 1997: Chapter 7; Culicover 1999; Ginzburg & Sag 2000: 5; Newmeyer 2005: 48; Kuhn 2007: 619). For example, there are interactions between various idioms and syntax (Nunberg, Sag & Wasow 1994). Most idioms in German with a verbal component allow the verb to be fronted (12b), some allow parts of the idiom to be fronted (12c) and some can undergo passivization (12d).

(12)	b. Er he macht makes ihm him den the Garaus. garaus
	'He finishes him off.'
	c. In Amerika sagte man der Kamera nach, die größte Kleinbildkamera der Welt zu sein. Sie war laut Schleiffer am Ende der Sargnagel der Mühlheimer Kameraproduktion. *Den* the *Garaus* garaus *machte* made ihr her die the Diskussion discussion um around die the Standardisierung standardization des of.the 16-Millimeter-Filmformats, 16-millimeter-film.format an at dessen whose Ende end die the DIN-Norm DIN-norm 19022 19022 (Patrone cartridge mit with Spule coil für for 16-Millimeter-Film) 16-millimeter-film stand, stood die that im in März March 1963 1963 zur to.the Norm norm wurde. became<sup>5</sup>
	'In America, one says that this camera was the biggest compact camera in the world. According to Schleiffer, it was the last nail in the coffin for camera production in Mühlheim. What finished it off was the discussion about standardizing the 16 millimeter format, which resulted in the DIN-Norm 19022 (cartridge with coil for 16 millimeter film) that became the norm in March 1963.'
	d. in in Heidelberg Heidelberg wird are "parasitären parasitic Elementen" elements unter among den the Professoren professors *der* the *Garaus* garaus *gemacht* made<sup>6</sup>
	'In Heidelberg, "parasitic elements" among professors are being killed off.'

It is assumed that the periphery and the lexicon are not components of UG (Chomsky 1986b: 150–151; Fodor 1998a: 343) but rather are acquired using other learning methods – namely inductively, directly from the input.

<sup>5</sup> Frankfurter Rundschau, 28.06.1997, p. 2.

<sup>6</sup> Mannheimer Morgen, 28.06.1999, Sport; Schrauben allein genügen nicht.

The question posed by critics is why these methods should not work for the regular aspects of language as well (Abney 1996: 20; Goldberg 2003a: 222; Newmeyer 2005: 100; Tomasello 2006c: 36; 2006b: 20): the areas of the so-called 'core' are by definition more regular than components of the periphery, which is why they should be easier to learn.

Tomasello (2000, 2003) has pointed out that a Principles & Parameters model of language acquisition is not compatible with the observable facts. Principles and Parameters Theory predicts that children should no longer make mistakes in a particular area of grammar once they have set the relevant parameter correctly (see Chomsky 1986b: 146, Radford 1990: 21–22 and Lightfoot 1997: 175). Furthermore, it is assumed that a single parameter is responsible for very different areas of grammar (see the discussion of the pro-drop parameter in Section 16.1). When a parameter value is set, there should therefore be sudden developments with regard to a number of phenomena (Lightfoot 1997: 174). This is, however, not the case. Instead, children acquire language from utterances in their input and begin to generalize from a certain age. Depending on the input, they can reorder certain auxiliaries but not others, although movement of auxiliaries is obligatory in English.<sup>7</sup> One argument put forward against these kinds of input-based theories is that children produce utterances that cannot be observed with significant frequency in the input. One much-discussed phenomenon of this kind is so-called *root infinitives* (RI) or *optional infinitives* (OI) (Wexler 1998): infinitive forms that can be used in non-embedded clauses (*root sentences*) instead of a finite verb. Optional infinitives are those where children use both a finite (13a) and a non-finite (13b) form (Wexler 1998: 59):

(13)	a. Mary likes ice cream.
	b. Mary like ice cream.

Wijnen, Kempen & Gillis (2001: 656) showed that Dutch children use the order object + infinitive 90 % of the time during the two-word phase, although these orders occur in less than 10 % of their mothers' utterances containing a verb. Compound verb forms, e.g., with a modal in initial position as in (14), which contain another instance of this pattern, only occurred in 30 % of the input containing a verb (Wijnen, Kempen & Gillis 2001: 647).

(14) Willst want du you Brei porridge essen? eat 'Do you want to eat porridge?'

At first glance, there seems to be a discrepancy between the input and the child's utterances. However, this deviation can be explained by an utterance-final bias in learning (Wijnen et al. 2001; Freudenthal, Pine & Gobet 2006). A number of factors can be made responsible for the salience of verbs at the end of an utterance: 1) restrictions of the infant brain: it has been shown that humans (both children and adults) forget words over the course of an utterance, that is, the activation potential decreases; since the cognitive capacities of small children are restricted, it is clear why elements at the end of an utterance have a special status. 2) Easier segmentation at the end of an utterance.

<sup>7</sup>Here, Yang's suggestion to combine grammars with a particular probability does not help since one would have to assume that the child uses different grammars for different auxiliaries, which is highly unlikely.


At the end of an utterance, part of the segmentation problem for hearers disappears: the hearer first has to divide a sequence of phonemes into individual words before he can understand them and combine them into larger syntactic entities. This segmentation is easier at the end of an utterance, since the word boundary is already given by the end of the utterance itself. Furthermore, according to Wijnen, Kempen & Gillis (2001: 637), utterance-final words are of above-average length and bear a pitch accent. These effects are even more pronounced in speech directed at children.
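The effect of such an utterance-final bias can be illustrated with a small simulation. The sketch below is a deliberate simplification loosely inspired by the models of Freudenthal and colleagues (the function name and the two-word limit are invented for illustration, not their implementation): a learner that stores utterance-final fragments acquires *Brei essen* from (14) long before anything sentence-initial is retained:

```python
# A sketch of an utterance-final bias in learning: the learner stores
# utterance-final fragments first, so "Willst du Brei essen?" yields
# "essen" and then "Brei essen".

def final_fragments(utterance, max_len=2):
    """Return utterance-final word sequences of up to max_len words."""
    words = utterance.split()
    return [" ".join(words[-n:]) for n in range(1, min(max_len, len(words)) + 1)]

inventory = set()
for utt in ["Willst du Brei essen", "Kannst du mir helfen"]:
    inventory.update(final_fragments(utt))
print(sorted(inventory))   # ['Brei essen', 'essen', 'helfen', 'mir helfen']
```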

Freudenthal, Pine, Aguado-Orea & Gobet (2007) have modeled language acquisition for English, German, Dutch, and Spanish. Their computer model was able to reproduce the differences between these languages on the basis of the input alone. At first glance, it is surprising that there are differences at all between German and Dutch and between English and Spanish with regard to the use of infinitives, as German and Dutch have a very similar syntax (SOV+V2) and English and Spanish are both languages with SVO order. Nevertheless, children learning English make OI mistakes, whereas this is hardly ever the case for children learning Spanish.

Freudenthal, Pine, Aguado-Orea & Gobet (2007) trace the differences in error frequencies back to the distributional differences in each language: the authors note that 75 % of verb-final utterances<sup>8</sup> in English consist of compound verbs (finite verb + dependent verb, e.g., *Can he go?*), whereas this is only the case 30 % of the time in Dutch.

German also differs from Dutch with regard to the number of utterance-final infinitives. Dutch has a progressive form that does not exist in Standard German:

(15) Wat what ben are je you aan on het it doen? do.inf 'What are you doing?'

Furthermore, verbs such as *zitten* 'to sit', *lopen* 'to run' and *staan* 'to stand' can be used in conjunction with the infinitive to describe events happening in that moment:

(16) Zit sit je you te to spelen? play 'Are you sitting and playing?'

Furthermore, there is a future form in Dutch that is formed with *ga* 'go'. These factors contribute to the fact that Dutch has 20 % more utterance-final infinitives than German. Spanish differs from English in that it has object clitics:

(17) (Yo) I Lo it quiero. want 'I want it.'

Short pronouns such as *lo* in (17) are realized in front of the finite verb so that the verb appears in final position. In English, however, the object follows the verb.

<sup>8</sup> For English, the authors only count utterances with a subject in third person singular since it is only in these cases that a morphological difference between the finite and infinitive form becomes clear.

Furthermore, there are a greater number of compound verb forms in the English input (70 %) than in Spanish (25 %). This is due to the higher frequency of the progressive in English and the presence of *do*-support in question formation.

The relevant differences in the distribution of infinitives are captured correctly by the proposed acquisition model, whereas alternative approaches that assume that children possess an adult grammar but use infinitives instead of the finite forms cannot explain the gradual nature of this phenomenon.

Freudenthal, Pine & Gobet (2009) were even able to show that input-based learning is superior to other explanations for the distribution of NPs and infinitives. Their model can explain why the object + infinitive order is often used with a modal meaning (e.g., 'to want') in German and Dutch (Ingram & Thompson 1996): in these languages, infinitives occur with modal verbs in the corresponding interrogative clauses. Alternative approaches that assume that the linguistic structures in question correspond to those of adults and differ from them only in that a modal verb is not pronounced cannot explain why not all object + infinitive utterances produced by children learning German and Dutch have a modal meaning. Furthermore, the main difference to English cannot be accounted for: in English, the number of modal meanings is considerably smaller. Input-based models predict this exactly, since English can use the dummy verb *do* to form questions:

(18)	a. Did he help you?
	b. Can he help you?

If larger entities are acquired from the end of an utterance, then there is both a modal and a non-modal context for *he help you*. Since German and Dutch normally do not use the auxiliary *tun* 'do', the relevant utterance endings are always associated with modal contexts. One can thereby explain why infinitival expressions have a modal meaning significantly more often in German and Dutch than in English.

Following this discussion of the arguments against input-based theories of acquisition, I will turn to Tomasello's pattern-based approach. According to Tomasello (2003: Section 4.2.1), a child hears a sentence such as (19) and realizes that particular slots can be filled freely (see also Dąbrowska (2001) for analogous suggestions in the framework of Cognitive Grammar).

(19)	b. Mommy is gone.

From these utterances, it is possible to derive so-called pivot schemata such as those in (20) into which words can then be inserted:

(20)	b. \_\_\_ gone → mommy/juice gone

In this stage of development (22 months), children do not generalize using these schemata; the schemata are instead construction islands and do not yet have any syntax (Tomasello et al. 1997). The ability to use previously unknown verbs with a subject and an object in an SVO order is only acquired slowly between the age of three and four (Tomasello 2003: 128–129). More abstract syntactic and semantic relations only emerge with time: when confronted with multiple instantiations of the transitive construction, the child is able to generalize:

(21)	b. [<sub>S</sub> [<sub>NP</sub> The man/the woman] likes [<sub>NP</sub> the dog/the rabbit/it]].
	c. [<sub>S</sub> [<sub>NP</sub> The man/the woman] kicks [<sub>NP</sub> the dog/the rabbit/it]].

According to Tomasello (2003: 107), this abstraction takes the form [Sbj TrVerb Obj]. Tomasello's approach is immediately plausible since one can recognize how abstraction works: it is a generalization over recurring patterns. Each pattern is then assigned a semantic contribution. These generalizations can be captured in inheritance hierarchies (see page 211) (Croft 2001: 26). The problem with this kind of approach, however, is that it cannot explain the interaction between different phenomena in the language: it is possible to represent simple patterns such as the use of transitive verbs in (21), but transitive verbs also interact with other areas of the grammar, such as negation. If one wishes to connect the construction assumed for the negation of transitive verbs with the transitive construction, one runs into a problem, since this is not possible in inheritance hierarchies.

(22) The woman did not kick the dog.

The problem is that the transitive construction has a particular semantic contribution, but the negated transitive construction has the opposite meaning. The values of sem features would therefore be contradictory. There are technical tricks to avoid this problem; however, since there is a vast number of these kinds of interactions between syntax and semantics, this kind of technical solution will result in something highly implausible from a cognitive perspective (Müller 2006, 2007b,a, 2010b; Müller & Wechsler 2014a). For discussion of Croft's analysis, see Section 21.4.1.
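The conflict can be made visible with a toy representation (the feature names below are invented and far simpler than real Construction Grammar feature structures): if the negated transitive construction inherits from the transitive construction, it inherits a sem value that contradicts the negated meaning and therefore has to be overridden, which is exactly what monotonic inheritance does not allow:

```python
# A toy illustration of the inheritance problem: the negated construction
# inherits the SEM value of the transitive construction, which then has to be
# destructively overridden to obtain the negated meaning.

transitive = {
    "form": ["Sbj", "TrVerb", "Obj"],
    "sem": "kick(x, y)",
}

negated_transitive = dict(transitive)               # inherit all constraints
negated_transitive["form"] = ["Sbj", "did", "not", "TrVerb", "Obj"]
negated_transitive["sem"] = "not(kick(x, y))"       # destructive override!

# In a monotonic inheritance hierarchy there is no such override: "kick(x, y)"
# and "not(kick(x, y))" are simply incompatible values for the same feature.
print(transitive["sem"], "vs.", negated_transitive["sem"])
```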

At this point, proponents of pattern-based analyses might try to argue that these kinds of problems are only the result of a poor or inadequate formalization and would rather do without a formalization altogether (Goldberg 2009: Section 5). However, this does not help here, as the problem is not the formalization itself; rather, the formalization allows one to see the problem more clearly.

An alternative to an approach built entirely on inheritance is a TAG-like approach that allows one to insert syntactic material into phrasal constructions. Such a proposal was discussed in Section 10.6.3. Bergen & Chang (2005: 170) working in Embodied Construction Grammar suggest an Active-Ditransitive Construction with the form [RefExpr Verb RefExpr RefExpr], where RefExpr stands for a referential expression and the first RefExpr and the verb may be non-adjacent. In this way, it is possible to analyze (23a,b), while ruling out (23c):

(23)	a. Mary tossed me a drink.
	b. Mary happily tossed me a drink.
	c. * Mary tossed happily me a drink.

While the compulsory adjacency of the verb and the object correctly predicts that (23c) is ruled out, the respective constraint also rules out coordinate structures such as (24):

(24) Mary tossed me a juice and Peter a water.

Part of the meaning of this sentence corresponds to what the ditransitive construction contributes to *Mary tossed Peter a water*. There is, however, a gap between *tossed* and *Peter*. Similarly, one can create examples where there is a gap between both objects of a ditransitive construction:

(25) He showed me and bought for Mary the book that was recommended in the Guardian last week.

In (25), *me* is not adjacent to *the book …*. It is not my aim here to call for a coordination analysis. Coordination is a very complex phenomenon for which most theories do not have a straightforward analysis (see Section 21.6.2). Instead, I would simply like to point out that the fact that constructions can be realized discontinuously poses a problem for approaches that claim that language acquisition is exclusively pattern-based. The point is the following: in order to understand coordination data in a language, a speaker must learn that a verb which has its arguments somewhere in the sentence has a particular meaning together with these arguments. The actual pattern [Sbj V Obj1 Obj2] can, however, be interrupted in all positions. In addition to the coordination examples, there is also the possibility of moving elements out of the pattern either to the left or the right. In sum, we can say that language learners have to learn that there is a relation between functors and their arguments. This is all that is left of pattern-based approaches, but this insight is also covered by the selection-based approaches that we will discuss in the following section.

A defender of pattern-based approaches could perhaps object that there is a relevant construction for (25) that combines all the material, that is, a construction with the form [Sbj V Obj1 Conj V PP Obj2]. It would then have to be determined experimentally or with corpus studies whether this actually makes sense. The generalization that linguists have found is that categories with the same syntactic properties can be coordinated (N, N′, NP, V, V′, VP, …). For the coordination of verbs or verbal projections, it must hold that the coordinated phrases require the same arguments:

(26)	b. Er he [kennt knows und and liebt] loves diese this Schallplatte. record
	'He knows and loves this record.'
	c. Er he [zeigt shows dem the Jungen] boy und and [gibt gives der the Frau] woman die the Punk-Rock-CD. punk rock CD
	'He shows the boy and gives the woman the punk rock CD.'
	d. Er he [liebt loves diese this Schallplatte] record und and [schenkt gives ihr her ein a Buch]. book
	'He loves this record and gives her a book.'

In an approach containing only patterns, one would have to assume an incredibly large number of constructions, and so far we have only been considering coordinations that consist of exactly two conjuncts. The phenomenon discussed above is, however, not restricted to coordinations of two elements. If we do not wish to abandon the distinction between competence and performance (see Chapter 15), then the number of conjuncts is not constrained at all (by the competence grammar):

(27) Er he [kennt, knows liebt loves und and verborgt] lends.out diese this Schallplatte. record

It is therefore extremely unlikely that learners have patterns for all possible cases in their input. It is much more likely that they draw the same kind of generalizations from the data in their input as linguists do: words and phrases with the same syntactic properties can be coordinated. If this is true, then all that is left for pattern-based approaches is the assumption of discontinuously realized constructions and thus a dependency between parts of constructions stating that they do not have to be immediately adjacent to one another. The acquisition problem is then the same as for the selection-based approaches that are the topic of the following section: what ultimately has to be learned are dependencies between elements, that is, valences (see Behrens (2009: 439), who reaches the same conclusion following different considerations).

# **16.4 Selection-based approaches**

I will call the alternative to pattern-based approaches *selection-based*. A selection-based approach has been proposed by Green (2011).

The generalizations about the pattern in (21) pertain to the valence class of the verb. In Categorial Grammar, the pattern [Sbj TrVerb Obj] corresponds to the lexical entry (s\np)/np (for the derivation of a sentence with this kind of lexical entry, see Figure 8.3 on page 249). A TAG tree for *likes* was given on page 423. Here, one can see quite clearly that lexical entries determine the structure of sentences in these models. Unlike pattern-based approaches, these analyses leave enough room for semantic embedding: the lexical entries in Categorial Grammar can be combined with adjuncts, and elementary trees in TAG also allow for adjunction to the relevant nodes.
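How such a lexical entry drives the derivation can be sketched with a toy string-based encoding of Categorial Grammar (the helper functions `strip_outer` and `combine` are invented for illustration; a real implementation would parse category terms properly). Forward application combines X/Y with a following Y, backward application combines Y with a following X\Y:

```python
# A toy, string-based encoding of Categorial Grammar combination:
# forward application  X/Y + Y => X, backward application Y + X\Y => X.
# The entry (s\np)/np for "likes" then determines the sentence structure.

def strip_outer(cat):
    """Remove one layer of outer parentheses, if present."""
    return cat[1:-1] if cat.startswith("(") and cat.endswith(")") else cat

def combine(left, right):
    (lphon, lcat), (rphon, rcat) = left, right
    if lcat.endswith("/" + rcat):                     # forward application
        return (lphon + " " + rphon, strip_outer(lcat[: -len(rcat) - 1]))
    if rcat.endswith("\\" + lcat):                    # backward application
        return (lphon + " " + rphon, strip_outer(rcat[: -len(lcat) - 1]))
    return None                                       # no combination possible

likes = ("likes", "(s\\np)/np")
man, dog = ("the man", "np"), ("the dog", "np")

vp = combine(likes, dog)   # (s\np)/np + np => s\np
s = combine(man, vp)       # np + s\np      => s
print(vp)                  # ('likes the dog', 's\\np')
print(s)                   # ('the man likes the dog', 's')
```

An adjunct of category (s\np)/(s\np) would combine with the value of `vp` in exactly the same way, which is how such analyses leave room for semantic embedding.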

Now, we face the question of how the jump from a pivot schema to a lexical entry with an argument structure takes place. In Tomasello's approach, there is no break between them: pivot schemata are phrasal patterns and [Sbj TrVerb Obj] is also a phrasal pattern. Both schemata have open slots into which certain elements can be inserted. In selection-based approaches, the situation is similar: the elements that are fixed in the pivot schema are functors in the selection-based approach. Green (2011) proposes a theory of acquisition in HPSG that can do without UG. For the two-word phase, she assumes that *where's* is the head of an utterance such as (28) and that *where's* selects *Robin* as its argument.

(28) Where's Robin?

This means that, rather than assuming that there is a phrasal pattern *Where's* X? with an empty slot X for a person or thing, she assumes that there is a lexical entry *where's*, which contains the information that it needs to be combined with another constituent. What needs to be acquired is the same in each case: there is particular material that has to be combined with other material in order to yield a complete utterance.

In her article, Green shows how long-distance dependencies and the position of English auxiliaries can be acquired in later stages of development. The acquisition of grammar proceeds in a monotonic fashion, that is, knowledge is added – for example, knowledge about the fact that material can be realized outside of the local context – and previous knowledge does not have to be revised. In her model, mistakes in the acquisition process are in fact mistakes in the assignment of lexical entries to valence classes; these mistakes have to be correctable.

In sum, one can say that all of Tomasello's insights can be applied directly to selection-based approaches, while the problems of pattern-based approaches do not arise. It is important to point out explicitly once again that the selection-based approach discussed here is also a construction-based approach; the constructions are just lexical rather than phrasal. The important point is that, in both approaches, words and also more complex phrases are pairs of form and meaning and can be acquired as such.

In Chapter 21, we will discuss pattern-based approaches further and we will also explore areas of the grammar where phrasal patterns should be assumed.

# **16.5 Summary**

We should take from the preceding discussion that models of language acquisition that assume that a grammar is chosen from a large set of grammars by setting binary parameters are in fact inadequate. All theories that make reference to parameters have in common that they are purely hypothetical since there is no non-trivial set of parameters that all proponents of the model equally agree on. In fact there is not even a trivial one.

In a number of experiments, Tomasello and his colleagues have shown that, in its original form, the Principles & Parameters model makes incorrect predictions and that language acquisition is much more pattern-based than assumed by proponents of P&P analyses. Syntactic competence develops starting from verb islands. Depending on the frequency of the input, certain verbal constructions can be mastered even though the same construction has not yet been acquired with less frequent verbs.

The interaction with other areas of grammar still remains problematic for pattern-based approaches: in a number of publications, it has been shown that the interaction of phenomena that one can observe in complex utterances cannot be explained with phrasal patterns, since embedding cannot be captured in an inheritance hierarchy. This problem is not shared by selection-based approaches. All experimental results and insights of Tomasello can, however, be successfully carried over to selection-based approaches.


# **Further reading**

Meisel (1995) gives a very good overview of theories of acquisition in the Principles & Parameters model.

Adele Goldberg and Michael Tomasello are the most prominent proponents of Construction Grammar, a theory that explicitly tries to do without the assumption of innate linguistic knowledge. They published many papers and books about topics related to Construction Grammar and acquisition. The most important books probably are Goldberg (2006) and Tomasello (2003).

An overview of different theories of acquisition in German can be found in Klann-Delius (2008); an English overview is Ambridge & Lieven (2011).

# **17 Generative capacity and grammar formalisms**

In several of the preceding chapters, the complexity hierarchy for formal languages was mentioned. The simplest languages are the so-called regular languages (Type-3); they are followed by the languages described by context-free grammars (Type-2), then by those described by context-sensitive grammars (Type-1), and finally by those of unrestricted grammars (Type-0), which generate the recursively enumerable languages, the most complicated class. In creating theories, a conscious effort was made to use formal means that correspond to what one can actually observe in natural language. This led to the abandonment of unrestricted Transformational Grammar, since it has generative power of Type-0 (see page 86). GPSG was deliberately designed in such a way as to be able to analyze just the context-free languages and no more. In the mid-80s, it was shown that natural languages have a higher complexity than context-free languages (Shieber 1985; Culy 1985). It is now assumed that so-called *mildly context-sensitive* grammars are sufficient for analyzing natural languages. Researchers working on TAG are developing variants of TAG that fall into exactly this category. Similarly, it was shown for different variants of Stabler's *Minimalist Grammars* (see Section 4.6.4 and Stabler 2001, 2011b) that they have mildly context-sensitive capacity (Michaelis 2001). Peter Hellwig's Dependency Unification Grammar is also mildly context-sensitive (Hellwig 2003: 595). LFG and HPSG, as well as Chomsky's theory in *Aspects*, fall into the class of Type-0 languages (Berwick 1982; Johnson 1988). The question at this point is whether it is an ideal goal to find a descriptive language that has exactly the same power as the object it describes. Carl Pollard (1997: 9) once remarked that it would be odd to claim that certain theories in physics were inadequate simply because they make use of tools from mathematics that are too powerful.<sup>1</sup> It is not the descriptive language that should constrain the theory; rather, the theory contains the restrictions that must hold for the objects in question. This is the view that Chomsky (1981b: 277, 280) takes. See also Berwick (1982: Section 4), Kaplan & Bresnan (1982: Section 8) on LFG and Johnson (1988: Section 3.5) on the *Off-Line Parsability Constraint* in LFG and attribute-value grammars in general.
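The cross-serial dependencies Shieber (1985) found in Swiss German are often schematized as the string set aⁿbᵐcⁿdᵐ, which is not context-free. The sketch below (a toy recognizer, not a grammar) shows that in a Type-0 formalism such as an ordinary programming language, recognizing this set is trivial, which illustrates Pollard's point that the raw power of the descriptive device says little by itself:

```python
# Cross-serial dependencies schematized as a^n b^m c^n d^m: the counts of a's
# and c's must match, as must those of b's and d's, with the dependencies
# crossing each other. A direct check of the two counts suffices.

import re

def crossing_dependencies(s):
    m = re.fullmatch(r"(a*)(b*)(c*)(d*)", s)
    return (bool(m)
            and len(m.group(1)) == len(m.group(3))   # a's matched by c's
            and len(m.group(2)) == len(m.group(4)))  # b's matched by d's

print(crossing_dependencies("aabccd"))    # True:  2 a's/2 c's, 1 b/1 d
print(crossing_dependencies("aabcccd"))   # False: 2 a's but 3 c's
```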

There is of course a technical reason to look for a grammar with the lowest level of complexity possible: we know that it is easier for computers to process grammars of lower complexity than more complex ones.

<sup>1</sup> If physicists required the formalism to constrain the theory:

Editor: Professor Einstein, I'm afraid we can't accept this manuscript of yours on general relativity. Einstein: Why? Are the equations wrong?

Editor: No, but we noticed that your differential equations are expressed in the first-order language of set theory. This is a totally unconstrained formalism! Why, you could have written down ANY set of differential equations! (Pollard 1997: 9)


To get an idea of the complexity of a task, the so-called worst case for the relevant computations is determined, that is, how long a program needs in the least favorable case to get a result for an input of a certain length, given a grammar from a certain class. This raises the question of whether the worst case is actually relevant. For example, some grammars that allow discontinuous constituents perform less favorably in the worst case than normal phrase structure grammars that only allow combinations of continuous strings (Reape 1991: Section 8). As I have shown in Müller (2004d), a parser that builds up larger units starting from words (a bottom-up parser) is far less efficient when processing a grammar with a verb movement analysis than a bottom-up parser processing a grammar that allows for discontinuous constituents. This has to do with the fact that verb traces do not contribute any phonological material and a parser cannot locate them without further machinery. The parser therefore has to posit a verb trace in every position in the string, and in most cases these traces do not contribute to an analysis of the complete input. Since the verb trace is not specified with regard to its valence information, it can be combined with any material in the sentence, which results in an enormous computational load. On the other hand, if one allows discontinuous constituents, then one can do without verb traces and the computational load is thereby reduced. The analysis using discontinuous constituents was eventually discarded for linguistic reasons (Müller 2005b,c, 2007a, 2023a); however, the investigation of the parsing behavior of both grammars is still interesting, as it shows that worst-case properties are not always informative.

I will discuss another example of the fact that language-specific restrictions can restrict the complexity of a grammar: Gärtner & Michaelis (2007: Section 3.2) assume that Stabler's Minimalist Grammars (see Section 4.6.4) with extensions for late adjunction and extraposition are actually more powerful than mildly context-sensitive. If one bans extraction from adjuncts (Frey & Gärtner 2002: 46) and also assumes the Shortest Move Constraint (see footnote 32 on page 165), then one arrives at a grammar that is mildly context-sensitive (Gärtner & Michaelis 2007: 178). The same is true of grammars with the Shortest Move Constraint and a constraint for extraction from specifiers.

Whether extraction takes place from a specifier or not depends on the organization of the particular grammar in question. In some grammars, all arguments are specifiers (Kratzer 1996: 120–123, also see Figure 18.4 on page 563). A ban on extraction from specifiers would imply that extraction out of arguments would be impossible. This is, of course, not true in general. Normally, subjects are treated as specifiers (also by Frey & Gärtner 2002: 44). It is often claimed that subjects are islands for extraction (see Grewendorf 1989: 35, 41; G. Müller 1996b: 220; 1998: 32, 163; Sabel 1999: 98; Fanselow 2001: 422). Several authors have noted, however, that extraction from subjects is possible in German (see Dürscheid 1989: 25; Haider 1993: 173; Pafel 1993; Fortmann 1996: 27; Borsley 1997: 320; Vogel & Steinbach 1998: 87; Ballweg 1997: 2066; Müller 1999b: 100– 101; De Kuthy 2002: 7). The following data are attested examples:

(1) a. [Von of den the übrigbleibenden left.over Elementen] elements scheinen seem [die the Determinantien determinants \_ ] die the wenigsten fewest Klassifizierungsprobleme classification.problems aufzuwerfen.<sup>2</sup> to.throw.up

'Of the remaining elements, the determinants seem to pose the fewest problems for classification.'

b. [Von of den the Gefangenen] prisoners hatte had eigentlich actually [keine none \_ ] die the Nacht night der of.the Bomben bombs überleben survive sollen. should<sup>3</sup>

'None of the prisoners should actually have survived the night of the bombings.'

c. [Von of der the HVA] HVA hielten held sich refl [etwa around 120 120 Leute people \_ ] dort there in in ihren their Gebäuden buildings auf. part<sup>4</sup>

'Around 120 people from the HVA stayed there inside their buildings.'


'Many of the fraction agreed with him that it is the buying power of citizens that needed to be increased, not the good spirits of the economy.'

f. [Vom from Erzbischof archbishop Carl Carl Theodor Theodor Freiherr Freiherr von from Dalberg] Dalberg gibt gives es it beispielsweise for.example [ein a Bild picture \_ ] im in.the Stadtarchiv.<sup>7</sup> city.archives 'For example, there is a picture of archbishop Carl Theodor Freiherr of Dalberg in the city archives.'

<sup>2</sup> In the main text of Engel (1970: 102).

<sup>3</sup> Bernhard Schlink, *Der Vorleser*, Diogenes Taschenbuch 22953, Zürich: Diogenes Verlag, 1997, p. 102. 4 Spiegel, 3/1999, p. 42.

<sup>5</sup> Frankfurter Rundschau, quoted from De Kuthy (2001: 52).

<sup>6</sup> taz, 16.10.2003, p. 5.

<sup>7</sup> Frankfurter Rundschau, quoted from De Kuthy (2002: 7).

g. [Gegen against die the wegen because.of Ehebruchs adultery zum to.the Tod death durch by Steinigen stoning verurteilte sentenced Amina Amina Lawal] Lawal hat has gestern yesterday in in Nigeria Nigeria [der the zweite second Berufungsprozess appeal.process \_ ] begonnen.<sup>8</sup> begun

'The second appeal process began yesterday against Amina Lawal, who was sentenced to death by stoning for adultery.'

h. [Gegen against diese this Kahlschlagspolitik] clear.cutting.politics finden happen derzeit at.the.moment bundesweit statewide [Proteste protests und and Streiks strikes \_ ] statt.<sup>9</sup> part

'At the moment, there are state-wide protests and strikes against this destructive politics.'

i. [Von of den the beiden, both die that hinzugestoßen joined sind], are hat has [einer one \_ ] eine a Hacke, pickaxe der the andere other einen a Handkarren.<sup>10</sup> handcart

'Of the two that joined, one had a pickaxe and the other a handcart.'


'Recently, there has been considerable controversy about the Chinese program by the Deutsche Welle.'

<sup>8</sup> taz, 28.08.2003, p. 2.

<sup>9</sup> Streikaufruf, Universität Bremen, 03.12.2003, p. 1.

<sup>10</sup>Haruki Murakami, *Hard-boiled Wonderland und das Ende der Welt*, suhrkamp taschenbuch, 3197, 2000, Translation by Annelie Ortmanns and Jürgen Stalph, p. 414.

<sup>11</sup>taz, 30.12.2004, p. 6.

<sup>12</sup>taz, 02.09.2005, p. 18.

<sup>13</sup>taz, 04.07.2005, p. 5.

<sup>14</sup>taz, 21.10.2008, p. 12.

This means that a ban on extraction from specifiers cannot hold for German. As such, it cannot be true for all languages.

We have a situation that is similar to the one with discontinuous constituents: since it is not possible to integrate the ban on extraction discussed here into the grammar formalism, the formalism is more powerful than what is required for describing natural language. However, the restrictions in actual grammars – in this case, the restrictions on extraction from specifiers in the relevant languages – ensure that the respective language-specific grammars have a mildly context-sensitive capacity.

# **18 Binary branching, locality, and recursion**

This chapter discusses three points: Section 18.1 deals with the question of whether all linguistic structures should be binary branching or not. Section 18.2 discusses the question of what information should be available for selection, that is, whether governing heads can access the internal structure of the elements they select or whether everything should be restricted to local selection. Finally, Section 18.3 discusses recursion and how/whether it is captured in the different grammar theories discussed in this book.

# **18.1 Binary branching**

We have seen that the question of the kind of branching structures assumed has received differing treatments in various theories. Classical X̄ theory assumes that a verb is combined with all its complements at once. In later variants of GB, all structures are strictly binary branching. Other frameworks do not treat the question of branching in a uniform way: there are proposals that assume binary-branching structures and others that opt for flat structures.

Haegeman (1994: Section 2.5) uses learnability arguments in support of binary branching (rate of acquisition; see Section 13.2 on this point). She discusses the example in (1) and claims that language learners would have to choose between eight structures if flat-branching structures could occur in natural language. If, on the other hand, there are only binary-branching structures, then the sentence in (1) cannot have the structures in Figure 18.1 to start with, and a learner therefore does not have to rule out the corresponding hypotheses.

(1) Mummy must leave now.


Figure 18.1: Structures with partial flat-branching

However, Haegeman (1994: 88) provides evidence for the fact that (1) has the structure in (2):

(2) [Mummy [must [leave now]]]

The relevant tests showing this include elliptical constructions, that is, the fact that it is possible to refer to the constituents in (2) with pronouns. This means that there is actually evidence for the structure of (1) assumed by linguists, and we therefore do not have to assume that it is hard-wired in our brains that only binary-branching structures are allowed. Haegeman (1994: 143) mentions a consequence of the binary branching hypothesis: if all structures are binary-branching, then it is not possible to account straightforwardly for sentences with ditransitive verbs in X̄ theory, since X̄ theory assumes that a head is combined with all its complements at once (see Section 2.5). So in order to account for ditransitive verbs with binary-branching structures, an empty element (little *v*) has to be assumed (see Section 4.1.4).

It should have become clear from the discussion of the arguments for the Poverty of the Stimulus in Section 13.8 that the assumption that it is part of our innate linguistic knowledge that only binary-branching structures are possible is nothing more than pure speculation. Haegeman offers no evidence for this assumption. As shown in the discussions of the various theories in this book, it is possible to capture the data with flat structures. For example, it is possible to assume that, in English, the verb is combined with its complements in a flat structure (Pollard & Sag 1994: 39). There are sometimes theory-internal reasons for deciding for one kind of branching or another, but these are not always applicable to other theories. For example, Binding Theory in GB is formulated with reference to dominance relations in trees (Chomsky 1981a: 188). If one assumes that syntactic structure plays a crucial role for the binding of pronouns (see page 90), then it is possible to draw conclusions about syntactic structure from the observable binding relations (see also Section 4.1.4). Binding data have, however, received very different treatments in the various theories. In LFG, constraints on f-structure are used for Binding Theory (Dalrymple 1993), whereas Binding Theory in HPSG operates on argument structure lists (valence information ordered in a particular way; see Section 9.1.1).

The opposite of Haegeman's position is the argumentation for flat structures put forward by Croft (2001: Section 1.6.2). In his Radical Construction Grammar FAQ, Croft observes that a phrasal construction such as the one in (3a) can be translated into a Categorial Grammar lexical entry like (3b).

(3)	a. [VP V NP]
	b. VP/NP

He claims that a disadvantage of Categorial Grammar is that it only allows for binary-branching structures and yet there are constructions with more than two parts (p. 49). Why exactly this is a problem is not explained, however. He even acknowledges himself that it is possible to represent constructions with more than two arguments in Categorial Grammar. For a ditransitive verb, the Categorial Grammar entry for English would take the form of (4):

(4) ((s\np)/np)/np
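Using the same kind of toy encoding as in the sketch in Section 16.4 (the helper functions are invented for illustration), the entry in (4) yields a strictly binary-branching derivation: the verb consumes its two objects one at a time and then its subject:

```python
# Toy string-based Categorial Grammar combination, applied to the ditransitive
# entry in (4): every step combines exactly two constituents.

def strip_outer(cat):
    return cat[1:-1] if cat.startswith("(") and cat.endswith(")") else cat

def combine(left, right):
    (lphon, lcat), (rphon, rcat) = left, right
    if lcat.endswith("/" + rcat):                     # forward application
        return (lphon + " " + rphon, strip_outer(lcat[: -len(rcat) - 1]))
    if rcat.endswith("\\" + lcat):                    # backward application
        return (lphon + " " + rphon, strip_outer(rcat[: -len(lcat) - 1]))
    return None

gave = ("gave", "((s\\np)/np)/np")
lee, kim, book = ("Lee", "np"), ("Kim", "np"), ("the book", "np")

v1 = combine(gave, kim)    # ((s\np)/np)/np + np => (s\np)/np
v2 = combine(v1, book)     # (s\np)/np + np      => s\np
s = combine(lee, v2)       # np + s\np           => s
print(s)                   # ('Lee gave Kim the book', 's')
```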

If we consider the elementary trees for TAG in Figure 18.2, it becomes clear that it is equally possible to incorporate semantic information into a flat tree and into a binary-branching tree. The binary-branching tree corresponds to a Categorial Grammar derivation.

Figure 18.2: Flat and binary-branching elementary trees

In both analyses in Figure 18.2, a meaning is assigned to a head that occurs with a certain number of arguments. Ultimately, the exact structure required depends on the kinds of restrictions on structures that one wishes to formulate. Such restrictions are not discussed in this book, but as explained above, some theories model binding relations with reference to tree structures: reflexive pronouns must be bound within a particular local domain inside the tree. In theories such as LFG and HPSG, these binding restrictions are formulated without any reference to trees. This means that evidence from binding data for one of the structures in Figure 18.2 (or for other tree structures) constitutes nothing more than theory-internal evidence.

Another reason to assume trees with more structure is the possibility to insert adjuncts on any node. In Chapter 9, an HPSG analysis for German that assumes binary-branching structures was proposed. With this analysis, it is possible to attach an adjunct to any node and thereby explain the free ordering of adjuncts in the middle field:

(5)	a. [weil] because der the Mann man der the Frau woman das the Buch book *gestern* yesterday gab gave
	b. [weil] because der the Mann man der the Frau woman *gestern* yesterday das the Buch book gab gave
	c. [weil] because der the Mann man *gestern* yesterday der the Frau woman das the Buch book gab gave
	d. [weil] because *gestern* yesterday der the Mann man der the Frau woman das the Buch book gab gave
	'because the man gave the woman the book yesterday'

This analysis is not the only one possible, however. One could also assume an entirely flat structure where arguments and adjuncts are dominated by one node. Kasper (1994) suggests this kind of analysis in HPSG (see also Section 5.1.5 for GPSG analyses that make use of metarules for the introduction of adjuncts). Kasper requires complex relational constraints that create syntactic relations between elements in the tree and also compute the semantic contribution of the entire constituent using the meaning of both the verb and the adjuncts. The analysis with binary-branching structures is simpler than those with complex relational constraints and – in the absence of theory-external evidence for flat structures – should be preferred to the analysis with flat structures. At this point, one could object that adjuncts in English cannot occur in all positions between arguments and therefore the binary-branching Categorial Grammar analysis and the TAG analysis in Figure 18.2 are wrong. This is not correct, however, as it is the specification of adjuncts with regard to the adjunction site that is crucial in Categorial Grammar. An adverb has the category (s\np)\(s\np) or (s\np)/(s\np) and can therefore only be combined with constituents that correspond to the VP node in Figure 18.2. In the same way, an elementary tree for an adverb in TAG can only attach to the VP node (see Figure 12.3 on page 421). For the treatment of adjuncts in English, binary-branching structures therefore do not make any incorrect predictions.

# **18.2 Locality**

The question of local accessibility of information has been treated in various ways by the theories discussed in this book. In the majority of theories, one tries to make information about the inner workings of phrases inaccessible for adjacent or higher heads, that is, *glaubt* 'believe' in (6) selects a sentential argument but it cannot "look inside" this sentential argument.

(6)	b. Karl Karl glaubt, believes dass that seine his Schwester sister morgen tomorrow kommt. comes
	'Karl believes that his sister is coming tomorrow.'

Thus, for example, *glauben* 'believe' cannot enforce that the subject of the embedded verb begin with a consonant or that the complementizer be combined with a verbal projection starting with an adjunct. In Section 1.5, we saw that it is a good idea to classify constituents in terms of their distribution, independently of their internal structure. If we are talking about an NP box, then it is not important what this NP box actually contains; it is only important that a given head wants to be combined with an NP with a particular case marking. This is called *locality of selection*.

Various linguistic theories have tried to implement locality of selection. The simplest form of this implementation is shown by phrase structure grammars of the kind discussed in Chapter 2. The rule in (17) on page 59, repeated here as (7), states that a ditransitive verb can occur with three noun phrases, each with the relevant case:

(7) S → NP(Per1,Num1,nom) NP(Per2,Num2,dat) NP(Per3,Num3,acc) V(Per1,Num1,ditransitive)

Since the symbols for NPs do not have any further internal structure, the verb cannot require, for example, that there be a relative clause in one of the NPs. The internal properties of the NP are not visible from the outside. We have already seen in the discussion in Chapter 2, however, that certain properties of phrases have to be visible from the outside. This was the information written on the boxes themselves. For noun phrases, at least information about person, number and case is required in order to correctly capture their relation to a head. The gender value is important in German as well, since adverbial phrases such as *einer nach dem anderen* 'one after the other' have to agree in gender with the noun they refer to (see example (12) on page 517). Apart from that, information about the length of noun phrases is required in order to determine their order in a clause: heavy constituents are normally ordered after lighter ones and are also often extraposed (cf. Behaghel's *Gesetz der wachsenden Glieder* 'Law of increasing constituents' (1909: 139; 1930: 86)).
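The box metaphor can be rendered directly as a data type. The following is a sketch with invented names, not a piece of any of the theories discussed: only the features written on the outside of the box are visible to a selecting head, so rule (7) can check case and agreement but cannot, say, require a relative clause inside the NP:

```python
# A sketch of the "boxes" idea: an NP box exposes only the features relevant
# for its distribution; its internal structure is not represented at all.

from dataclasses import dataclass

@dataclass(frozen=True)
class NPBox:
    person: int    # 1, 2, 3
    number: str    # "sg" or "pl"
    case: str      # "nom", "dat", "acc", ...
    gender: str    # relevant for agreement with adverbials
    heavy: bool    # relevant for ordering and extraposition, not for selection

def ditransitive_ok(subj, iobj, dobj, verb_agr):
    """Check rule (7): three NPs with the right cases, subject agreeing with V."""
    return (subj.case == "nom" and iobj.case == "dat" and dobj.case == "acc"
            and (subj.person, subj.number) == verb_agr)

subj = NPBox(3, "sg", "nom", "masc", heavy=False)
iobj = NPBox(3, "sg", "dat", "fem", heavy=False)
dobj = NPBox(3, "sg", "acc", "neut", heavy=True)
print(ditransitive_ok(subj, iobj, dobj, verb_agr=(3, "sg")))   # True
```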

Theories that strive to be as restrictive as possible with respect to locality therefore have to develop mechanisms that make accessible only the information required to explain the distribution of constituents. This is often achieved by projecting certain properties to the mother node of a phrase. In X̄ theory, the part of speech of a head is passed up to the maximal projection: if the head is an N, for example, then the maximal projection is an NP. In GPSG, HPSG and variants of CxG, there are Head Feature Principles responsible for the projection of features. Head Feature Principles ensure that an entire group of features, the so-called head features, is present on the maximal projection of a head. Furthermore, every theory has to be capable of representing the fact that a constituent can lack one of its parts, with this part realized via a long-distance dependency in another position in the clause. As previously discussed on page 307, there are languages in which complementizers inflect depending on whether their complement is missing a constituent or not. This means that this property must be somehow accessible. In GPSG, HPSG and variants of CxG, there are additional groups of features that are present at every node between a filler and a gap in a long-distance dependency. In LFG, there is f-structure instead: using Functional Uncertainty, one can look for the position in the f-structure where a particular constituent is missing. In GB theory, movement proceeds cyclically, that is, an element is moved into the specifier of CP and can be moved from there into the next highest CP. It is assumed in GB theory that heads can look inside their arguments, at least at the elements in the specifier position. If complementizers can access the relevant specifier positions, then they can determine whether something is missing from an embedded phrase or not. In GB theory, there was also an analysis of case assignment in infinitive constructions in which the case-assigning verb governs into the embedded phrase and assigns case to the element in SpecIP. Figure 18.3 shows the relevant structure taken from Haegeman (1994: 170).

Figure 18.3: Analysis of the AcI construction with *Exceptional Case Marking*

case to the subject (cf. page 110), *him* does not receive case from I. Instead, it is assumed that the verb *believe* assigns case to the subject of the embedded infinitive.

Verbs that can assign case across phrase boundaries are referred to as ECM verbs, where ECM stands for *Exceptional Case Marking*. As the name suggests, this instance of case assignment into a phrase was viewed as an exception. In newer versions of the theory (e.g., Kratzer 1996: 120–123), all case assignment is to specifier positions. For example, the Voice head in Figure 18.4 on the next page assigns accusative to the DP in the specifier of VP. Since the Voice head governs into the VP, case assignment to a run-of-the-mill object in this theory is an instance of exceptional case assignment as well. The same is true in Adger's version of Minimalism, which was discussed in Chapter 4: Adger (2010) argues that his theory is more restrictive than LFG or HPSG since it is only one feature that can be selected by a head, whereas in LFG and HPSG complex feature bundles are selected. However, the strength of this kind of locality constraint is weakened by the operation Agree, which allows for nonlocal feature checking. As in Kratzer's proposal, case is assigned nonlocally by little *v* to the object inside the VP (see Section 4.1.5.2).

Adger discusses PP arguments of verbs like *depend* and notes that these verbs need specific PPs, that is, the form of the preposition in the PP has to be selectable. While

Figure 18.4: Analysis of structures with a transitive verb following Kratzer

this is trivial in Dependency Grammar, where the preposition is selected right away, the respective information is projected in theories like HPSG and is then selectable at the PP node. However, this requires that the governing verb can determine at least two properties of the selected element: its part of speech and the form of the preposition. This is not possible in Adger's system and he left this for further research. Of course it would be possible to assume an onP (a phrasal projection of *on* that has the category 'on'). Similar solutions have been proposed in Minimalist theories (see Section 4.6.1 on functional projections), but such a solution would obviously miss the generalization that all prepositional phrases have something in common, which would not be covered in a system with atomic categories that are word specific.

In theories such as LFG and HPSG, case assignment takes place locally in constructions such as those in (8):

	- b. Ich halte ihn für einen Lügner.
	     I hold him for a.acc liar
	     'I take him to be a liar.'
	- c. Er scheint ein Lügner zu sein.
	     he seems a.nom liar to be
	     'He seems to be a liar.'
	- d. Er fischt den Teich leer.
	     he fishes the.acc pond empty
	     'He fishes (in) the pond (until it is) empty.'

Although *him*, *ihn* 'him', *er* 'he' and *den Teich* 'the pond' are not semantic arguments of the finite verbs, they are syntactic arguments (they are raised) and can therefore be assigned case locally. See Bresnan (1982a: 348–349 and Section 8.2) and Pollard & Sag (1994: Section 3.5) for an analysis of raising in LFG and HPSG respectively. See Meurers (1999c), Przepiórkowski (1999b), and Müller (2007a: Section 17.4) for case assignment in HPSG and for its interaction with raising.

There are various phenomena that are incompatible with strict locality and require the projection of at least some information. For example, there are question tags in English that must match the subject of the clause with which they are combined:

	- b. They are very smart, aren't they?

Bender & Flickinger (1999) and Flickinger & Bender (2003) therefore propose making information about agreement or the referential index of the subject available on the sentence node.<sup>1</sup> In Sag (2007), all information about phonology, syntax and semantics of the subject is represented as the value of a feature xarg (external argument). Here, *external argument* does not stand for what it does in GB theory, but should be understood in a more general sense. For example, it makes the possessive pronoun accessible on the node of the entire NP. Sag (2007) argues that this is needed to force coreference in English idioms:

	- b. They kept/lost [their / \*our cool].

The use of the xarg feature looks like an exact parallel to accessing the specifier position as we saw in the discussion of GB. However, Sag proposes that complements of prepositions in Polish are also made accessible by xarg since there are data suggesting that higher heads can access elements inside PPs (Przepiórkowski 1999a: Section 5.4.1.2).

In Section 10.6.2 about Sign-based Construction Grammar, we already saw that a theory that only makes the reference to one argument available on the highest node of a projection cannot provide an analysis for idioms of the kind given in (11). This is because the subject is made available with verbal heads; however, it is the object that needs to be accessed in sentences such as (11). This means that one has to be able to formulate constraints affecting larger portions of syntactic structure.

	- b. Jonas glaubt, ihn tritt ein Pferd.<sup>3</sup>
	     Jonas believes him kicks a horse
	     'Jonas is utterly surprised.'
	- c. # Jonas glaubt, dich tritt ein Pferd.
	     Jonas believes you kicks a horse
	     'Jonas believes that a horse kicks you.'

<sup>1</sup> See also Sag & Pollard (1991: 89).

<sup>2</sup> Richter & Sailer (2009: 311).

<sup>3</sup> http://www.machandel-verlag.de/der-katzenschatz.html, 2015-07-06.

Theories of grammar with extended locality domains do not have any problems with this kind of data.<sup>4</sup> An example of this kind of theory is TAG. In TAG, one can specify trees of exactly the right size (Abeillé 1988, Abeillé & Schabes 1989). All the material that is fixed in an idiom is simply determined in the elementary tree. Figure 18.5 shows the tree for *kick the bucket* as it is used in (12a).

	- b. Cowboys often kick the bucket.
	- c. He kicked the proverbial bucket.

Figure 18.5: Elementary tree for *kick the bucket*

Since TAG trees can be split up by adjunction, it is possible to insert elements between the parts of an idiom as in (12b,c) and thus explain the flexibility of idioms with regard to adjunction and embedding.<sup>5</sup> Depending on whether the lexical rules for the passive and long-distance dependencies can be applied, the idiom can occur in the relevant variants.


<sup>4</sup>Or more carefully put: they do not have any serious problems since the treatment of idioms in all their many aspects is by no means trivial (Sailer 2000).

<sup>5</sup> Interestingly, variants of Embodied CxG are strikingly similar to TAG. The Ditransitive Construction that was discussed on page 344 allows for additional material to occur between the subject and the verb.

The problems that arise for the semantics construction are also similar. Abeillé & Schabes (1989: 9) assume that the semantics of *John kicked the proverbial bucket* is computed from the parts *John*′, *kick-the-bucket*′ and *proverbial*′, that is, the added modifiers always have scope over the entire idiom. This is not adequate for all idioms (Fischer & Keil 1996):

(i) Er band ihr einen großen Bären auf.
    he tied her a big bear on
    'He pulled (a lot of) wool over her eyes.'

In the idiom in (i), *Bär* 'bear' actually means 'lie' and the adjective has to be interpreted accordingly. The relevant tree should therefore contain nodes that contribute semantic information and also say something about the composition of these features.

In the same way, when computing the semantics of noun phrases in TAG and Embodied Construction Grammar, one should bear in mind that the adjective that is combined with a discontinuous NP Construction (see page 342) or an NP tree can have narrow scope over the noun (*all alleged murderers*).


In cases where the entire idiom or parts of the idiom are fixed, it is possible to rule out adjunction to the nodes of the idiom tree. Figure 18.6 shows a pertinent example from Abeillé & Schabes (1989: 7). The ban on adjunction is marked by a subscript NA.

Figure 18.6: Elementary tree for *take into account*

The question that also arises for other theories is whether the efforts that have been made to enforce locality should be abandoned altogether. In our box model in Section 1.5, this would mean that all boxes were transparent. Since plastic boxes do not allow all of the light through, objects contained in multiple boxes cannot be seen as clearly as those in the topmost box (the path of Functional Uncertainty is longer). This is parallel to a suggestion made by Kay & Fillmore (1999) in CxG. Kay and Fillmore explicitly represent all the information about the internal structure of a phrase on the mother node and therefore have no locality restrictions at all in their theory. In principle, one can motivate this kind of theory in parallel to the argumentation in Chapter 17. The argument there made reference to the complexity of the grammatical formalism: the kind of complexity that the language of description has is unimportant; what matters is what one does with it. In the same way, one can say that regardless of what kind of information is in principle accessible, it is not accessed if this is not permitted. This was the approach taken by Pollard & Sag (1987: 143–145).

It is also possible to assume a world in which all the boxes contain transparent areas where it is possible to see parts of their contents. This is more or less the LFG world: the information about all levels of embedding contained in the f-structure is visible to both the inside and the outside. We have already discussed Nordlinger's (1998) LFG analysis of Wambaya on page 310. In Wambaya, words that form part of a noun phrase can be distributed throughout the clause. For example, an adjective that refers to a noun can occur in a separate position from it. Nordlinger models this by assuming that an adjective can make reference to an argument in the f-structure and then agrees with it in terms of case, number and gender. Bender (2008c) has shown that this analysis can be transferred to HPSG: instead of no longer representing an argument on the mother node after it has been combined with a head, simply marking the argument as realized allows us to keep it in the representation (Meurers 1999c; Przepiórkowski 1999b; Müller 2007a: Section 17.4). Meurers (1999c: 199) compares both of these HPSG approaches to different ways of working through a shopping list: in the standard approach taken by Pollard & Sag (1994), one tears away parts of the shopping list once the relevant item has been found. In the other case, the relevant item on the list is crossed out. At the end of the shopping trip, one ends up with a list of what has been bought as well as the items themselves.

I have proposed the crossing-out analysis for depictive predicates in German and English (Müller 2004a, 2008). Depictive predicates say something about the state of a person or object during the event expressed by a verb:

	- b. He saw her naked.

In (13), the depictive adjective can refer either to the subject or to the object. However, there is a strong preference for readings where the antecedent noun precedes the depictive predicate (Lötscher 1985: 208). Figure 18.7 on the following page shows analyses for the sentences in (14):

	- b. dass er ungewaschen/∗ die Äpfel isst
	     that he unwashed the apples eats
	     'that he eats the apples (while he is) unwashed'

Arguments that have been realized are still represented on the upper nodes; however, they are crossed out and thereby marked as "realized". In German, the preference for the antecedent noun to precede the depictive can be captured by a restriction stating that the antecedent noun must not have been realized yet.
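The crossing-out idea can be made concrete with a small sketch. The following toy code (the list representation and names are mine, not from the HPSG literature) contrasts the two treatments of a comps list that Meurers compares to shopping lists, and adds the depictive restriction just mentioned:

```python
def saturate_by_removal(comps, arg):
    """Pollard & Sag (1994): tear the found item off the list."""
    return [(a, realized) for a, realized in comps if a != arg]

def saturate_by_marking(comps, arg):
    """Crossing-out: keep the item, but mark it as realized."""
    return [(a, True) if a == arg else (a, realized)
            for a, realized in comps]

def possible_antecedents(comps):
    """German depictives: only arguments not yet realized qualify."""
    return [a for a, realized in comps if not realized]

comps = [("NP[nom]", False), ("NP[acc]", False)]
print(saturate_by_removal(comps, "NP[acc]"))  # [('NP[nom]', False)]
after = saturate_by_marking(comps, "NP[acc]")  # die Äpfel has been realized
print(after)                        # [('NP[nom]', False), ('NP[acc]', True)]
print(possible_antecedents(after))  # ['NP[nom]']
```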

It is commonly assumed for English that adjuncts are combined with a VP.

	- b. You can't [[VP give them injections] unconscious].<sup>7</sup>

In approaches where the arguments of the verb are accessible at the VP node, it is possible to establish a relation between the depictive predicate and an argument although the antecedent noun is inside the VP. English differs from German in that depictives can refer to both realized (*them* in (15b)) and unrealized (*you* in (15b)) arguments.

<sup>6</sup>Haider (1985b: 94).

<sup>7</sup> Simpson (2005: 17).

Figure 18.7: Analysis of *dass er die Äpfel ungewaschen isst* 'that he the apples unwashed eats' and *dass er ungewaschen die Äpfel isst* 'that he unwashed the apples eats'

Higginbotham (1985: 560) and Winkler (1997) have proposed corresponding non-cancellation approaches in GB theory. There are also parallel suggestions in Minimalist theories: checked features are not deleted, but instead marked as already checked (Stabler 2011b: 14). However, these features are still viewed as inaccessible.

Depending on how detailed the projected information is, it can be possible to see adjuncts and arguments in embedded structures as well as their phonological, syntactic and semantic properties. In the CxG variant proposed by Kay and Fillmore, all information is available. In LFG, information about grammatical function, case and similar properties is accessible. However, the part of speech is not contained in the f-structure. If the part of speech does not stand in a one-to-one relation to grammatical function, it cannot be restricted using selection via f-structure. Nor is phonological information represented completely in the f-structure. If the analysis of idioms requires nonlocal access to phonological information or part of speech, then this has to be explicitly encoded in the f-structure (see Bresnan (1982b: 46–50) for more on idioms).

In the HPSG variant that I adopt, only information about arguments is projected. Since arguments are always represented by descriptions of type *synsem*, no information about their phonological realization is present. However, there are daughters in the structure so that it is still possible to formulate restrictions for idioms as in TAG or Construction Grammar (see Richter & Sailer (2009) for an analysis of the 'horse' example in (11a)). This may seem somewhat like overkill: although we already have the tree structure, we are still projecting information about arguments that have already been realized (unfortunately these also contain information about their arguments and so on). At this point, one could be inclined to prefer TAG or LFG since these theories only make use of one extension of locality: TAG uses trees of arbitrary or rather exactly the necessary size and LFG makes reference to a complete f-structure. However, things are not quite that simple: if one wants to create a relation to an argument when adjoining a depictive

predicate in TAG, then one requires a list of possible antecedents. Syntactic factors (e.g., reference to dative vs. accusative noun phrases, to arguments vs. adjuncts, coordination of verbs vs. nouns) play a role in determining the referent noun; this cannot be reduced to semantic relations. Similarly, there are considerably different restrictions for different kinds of idioms and these cannot all be formulated in terms of restrictions on f-structure since f-structure does not contain information about parts of speech.

One should bear in mind that some phenomena require reference to larger portions of structure. The majority of phenomena can be treated in terms of head domains and extended head domains, however, there are idioms that go beyond the sentence level. Every theory has to account for this somehow.

# **18.3 Recursion**

Every theory in this book can deal with self-embedding in language as it was discussed on page 4. The example in (2) is repeated here as (16):

(16) that Max thinks [that Julia knows [that Otto claims [that Karl suspects [that Richard confirms [that Friederike is laughing]]]]]

Most theories capture this directly with recursive phrase structure rules or dominance schemata. TAG is special with regard to recursion since recursion is factored out of the trees. The corresponding effects are created by an adjunction operation that allows any amount of material to be inserted into trees. It is sometimes claimed that Construction Grammar cannot capture the existence of recursive structure in natural language (e.g., Leiss 2009: 269). This impression is understandable since many analyses are extremely surface-oriented. For example, one often talks of a [Sbj TrVerb Obj] construction. However, the grammars in question also become recursive as soon as they contain a sentence embedding or relative clause construction. A sentence embedding construction could have the form [Sbj that-Verb that-S], where a that-Verb is one that can take a sentential complement and that-S stands for the respective complement. A *that*-clause can then be inserted into the that-S slot. Since this *that*-clause can also be the result of the application of this construction, the grammar is able to produce recursive structures such as those in (17):

(17) Otto claims [that-S that Karl suspects [that-S that Richard sleeps]].

In (17), both *Karl suspects that Richard sleeps* and the entire clause are instances of the [Sbj that-Verb that-S] construction. The entire clause therefore contains an embedded subpart that is licensed by the same construction as the clause itself. (17) also contains a constituent of the category *that*-S that is embedded inside of *that*-S. For more on recursion and self-embedding in Construction Grammar, see Verhagen (2010).
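The point that such a grammar is recursive can be illustrated with a few lines of code. The following sketch (the mini-lexicon is made up for the illustration) implements the [Sbj that-Verb that-S] construction; since the that-S slot may be filled by the construction's own output, arbitrarily deep embeddings like (17) are licensed:

```python
import random

NAMES = ["Otto", "Karl", "Richard"]
THAT_VERBS = ["claims", "suspects", "thinks"]  # verbs taking that-clauses

def that_s_construction(depth):
    """[Sbj that-Verb that-S]: the that-S slot is filled either by a
    simple clause or by another instance of this very construction."""
    subj = random.choice(NAMES)
    if depth == 0:
        return subj + " sleeps"
    verb = random.choice(THAT_VERBS)
    return f"{subj} {verb} that {that_s_construction(depth - 1)}"

print(that_s_construction(2))
# e.g., "Otto claims that Karl suspects that Richard sleeps", cf. (17)
```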

Similarly, every Construction Grammar that allows a noun to combine with a genitive noun phrase also allows for recursive structures. The construction in question could have the form [Det N NP[gen] ] or [ N NP[gen] ]. The [Det N NP[gen] ] construction licenses structures such as (18):

(18) [NP des Kragens [NP des Mantels [NP der Vorsitzenden]]]
     the collar of.the coat of.the chairwoman
     'the collar of the coat of the chairwoman'

Jurafsky (1996) and Bannard, Lieven & Tomasello (2009) use probabilistic context-free grammars (PCFG) for a Construction Grammar parser with a focus on psycholinguistic plausibility and modeling of acquisition. Context-free grammars have no problems with self-embedding structures like those in (18) and thus this kind of Construction Grammar itself does not encounter any problems with self-embedding.

Goldberg (1995: 192) assumes that the resultative construction for English has the following form:

(19) [SUBJ [V OBJ OBL]]

This corresponds to a complex structure as assumed for elementary trees in TAG. LTAG differs from Goldberg's approach in that every structure requires a lexical anchor, that is, in (19), for example, the verb would have to be fixed in LTAG. But in Goldberg's analysis, verbs can be inserted into independently existing constructions (see Section 21.1). In TAG publications, it is often emphasized that elementary trees do not contain any recursion. The entire grammar is recursive, however, since additional elements can be added to the tree using adjunction and – as (17) and (18) show – insertion into substitution nodes can also create recursive structures.

# **19 Empty elements**

This chapter deals with empty elements. I first discuss the general attitude of various research traditions towards empty elements and then show how they can be eliminated from grammars (Section 19.2). Section 19.3 discusses empty elements that have been suggested in order to facilitate semantic interpretation. Section 19.4 discusses possible motivation for empty elements with a special focus on cross-linguistic comparison and the final Section 19.5 shows that certain accounts with transformations, lexical rules, and empty elements can be translated into each other.

# **19.1 Views on empty elements**

One point that is particularly controversial among proponents of the theories discussed in this book is the question of whether one should assume empty elements or not. The discussion of empty elements is quite old: there was already some investigation in 1961 with reference to phrase structure grammars (Bar-Hillel, Perles & Shamir 1961). The discussion of the status of empty elements has carried on ever since (see Löbner 1986, Wunderlich 1987, 1989, von Stechow 1989, Haider 1997a, Sag 2000, Bouma, Malouf & Sag 2001, Levine & Hukari 2006, Müller 2004e, Arnold & Spencer 2015, for example). There are sometimes empirical differences between analyses that assume empty elements and those that do not (Arnold & Spencer 2015), but often this is not the case. Since empty elements often feature prominently in the argumentation for or against particular theories, I will discuss how they have been used in somewhat more detail here.

In GB theory, empty elements were assumed for traces of movement (verb movement and fronting of phrases) as well as for deleted elements in elliptical constructions. Starting with the analysis of Larson (1988), more and more empty heads have been introduced to ensure uniformity of structures and certain semantic interpretations (binding and scope, see Section 4.1.4 on little *v*). Other examples of empty elements that were introduced in order to maintain particular generalizations are the empty expletives of Coopmans (1989: 734) and Postal (2004: Chapter 1). These fill the subject position in inversion structures in English, where the position preceding the verb is occupied by a PP and not by an overt subject NP. Similarly, Safir (1985: Section 4) assumes that impersonal passives in German contain empty expletive subjects. Grewendorf (1995: 1311) assumes that the subject position in impersonal passives and passives without subject movement is in fact occupied by an empty expletive. Also, see Newmeyer (2005: 91) and Lohnstein (2014: 180) for this assumption with regard to the passive in German. Sternefeld (2006: Section II.3.3.3) assumes that there is an empty expletive subject in impersonal passives and subjectless sentences such as (1).

	- b. Mich dürstet.
	     me.acc is.thirsty
	     'I am thirsty.'

On page 166, we discussed Stabler's proposal for the analysis of sentences with intransitive verbs. Since, following Chomsky (2008: 146), the element that first merges with a head is the complement, intransitive verbs pose a problem for the theory. Stabler solves this problem by assuming that intransitive verbs are combined with an empty object (Veenstra 1998: 61, 124). Since these silent elements do not contribute to the meaning of an expression, we are also dealing with empty expletive pronouns.

In other theories, there are researchers that reject empty elements as well as those who assume them. In Categorial Grammar, Steedman suggests an analysis of nonlocal dependencies that does without empty elements (see Section 8.5), but as Pollard (1988) has shown, Steedman's analysis requires various kinds of type raising for NPs or a correspondingly high number of complex lexical items for relative pronouns (see Section 8.5.3). On the other hand, König (1999) uses traces. In GPSG, there is the traceless analysis of extraction by Uszkoreit (1987: 76–77) that we discussed in Section 5.4, but there is also the analysis of Gazdar, Klein, Pullum & Sag (1985: 143) that uses traces. In LFG, there are both analyses with traces (Bresnan 2001: 67) and those without (see Kaplan & Zaenen (1989), Dalrymple et al. (2001) and Section 7.3 and Section 7.5). Many of the phrasal analyses in HPSG are born out of the wish to avoid empty elements (see Section 21.10). An example of this is the relative clause analysis by Sag (1997) that replaces the empty relativizer in Pollard & Sag (1994) with a corresponding phrasal rule. On the other hand, we have Bender (2001) and Sag, Wasow & Bender (2003: 464), who assume a silent copula, Borsley (1999a, 2009, 2013), who argues for empty elements in the grammar of Welsh, and Alqurashi & Borsley (2012), who suggest an empty relativizer for Arabic. Another attempt to eliminate empty elements from HPSG was to handle long-distance dependencies not by traces but rather in the lexicon (Bouma, Malouf & Sag 2001). As Levine & Hukari (2006) showed, however, theories of extraction that introduce long-distance dependencies lexically have problems with the semantic interpretation of coordinate structures. For a suggestion of how to solve these problems, see Chaves (2009). There are many TAG analyses without silent elements in the lexicon (see Section 12.5 and Kroch (1987), for example); however, there are variants of TAG such as that of Kallmeyer (2005: 194), where a trace is assumed for the reordering of constituents in sentences with a verbal complex. Rambow (1994: 10–11) assumes an empty head in every verb phrase (see Section 12.6.2 on V-TAG).<sup>1</sup> In Dependency Grammar, Mel'čuk (1988: 15, 303; 2003: 219), Starosta (1988: 253), Eroms (2000: 471–472), Hudson (2007: Section 3.7; 2010a: 166) and Engel (2014) assume empty elements for determiners, nouns, ellipsis, imperatives,

<sup>1</sup>Note that empty elements in TAG are slightly different from empty elements in other theories. In TAG the empty elements are usually part of elementary trees, that is, they are not lexical items that are combined with other material.

copulas, controlled infinitives, and for coordinate structures, but Groß & Osborne (2009: 73) reject empty elements (with the exception of ellipsis, Osborne 2018c).

No empty elements are assumed in Construction Grammar (Michaelis & Ruppenhofer 2001: 49–50; Goldberg 2003a: 219; Goldberg 2006: 10), the related Simpler Syntax (Culicover & Jackendoff 2005) as well as in Cognitive Grammar.<sup>2</sup> The argumentation against empty elements runs along the following lines:


This raises the question of whether all the premises on which the conclusion is based actually hold. If we consider an elliptical construction such as (2), then it is clear that a noun has been omitted:

(2) Ich nehme den roten Ball und du den blauen.
    I take the.acc red.acc ball and you the.acc blue.acc
    'I'll take the red ball and you take the blue one.'

Despite there being no noun in *den blauen* 'the blue', this group of words behaves both syntactically and semantically just like a noun phrase. (2) is of course not necessarily evidence for there being empty elements, because one could simply say that *den blauen* is a noun phrase consisting only of an article and an adjective (Wunderlich 1987).

Just as it is understood that a noun is missing in (2), speakers of English know that something is missing after *like*:

(3) Bagels, I like.

Every theory of grammar has to somehow account for these facts. It must be represented in some way that *like* in (3) behaves just like a verb phrase that is missing something. One possibility is to use traces. Bar-Hillel, Perles & Shamir (1961: 153, Lemma 4.1) have shown that it is possible to turn phrase structure grammars with empty elements into those without any. In many cases, the same techniques can be applied to the theories presented here and we will therefore discuss the point in more detail in the following section.

# **19.2 Eliminating empty elements from grammars**

It is possible to turn a grammar with empty elements (also called *epsilon*) into a grammar without them: for every rule in which a category that can be rewritten as epsilon occurs, one adds a variant of the rule without that category. The following example has an epsilon rule for np. One therefore has to supplement all rules containing the symbol np with new rules without this np symbol. (5) shows the result of this conversion of the grammar in (4):

<sup>2</sup>However, Fillmore (1988: 51) did not rule them out.

$$\begin{array}{rcl} \text{(4)} & \overline{\text{v}} & \rightarrow \text{np}, \text{v} \\ & \overline{\text{v}} & \rightarrow \text{np}, \text{pp}, \text{v} \\ & \text{np} & \rightarrow \epsilon \\\\ \text{(5)} & \overline{\text{v}} & \rightarrow \text{np}, \text{v} \\ & \overline{\text{v}} & \rightarrow \text{v} \\ & \overline{\text{v}} & \rightarrow \text{np}, \text{pp}, \text{v} \\ & \overline{\text{v}} & \rightarrow \text{pp}, \text{v} \end{array}$$

This conversion can also lead to cases where all elements on the right-hand side of a rule are removed. In such cases, one has effectively created a new empty category, and the replacement process has to be applied again. We will see an example of this in a moment. Looking at the pair of grammars in (4)–(5), it is clear that the number of rules in (5) has increased compared to (4) despite the grammars licensing the same sequences of symbols. The fact that an NP argument can be omitted is not expressed directly in (5) but instead is implicitly contained in two rules.
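For concreteness, here is a minimal sketch of the conversion (the grammar encoding and the symbol names are my own; 'vbar' stands for v̄). The iterative closure in step 1 corresponds to the repeated replacement process just described:

```python
from itertools import combinations

def eliminate_epsilon(rules):
    """rules: (lhs, rhs) pairs; an epsilon rule has an empty rhs."""
    # Step 1: collect every symbol that can be rewritten as epsilon,
    # including symbols whose entire right-hand side is nullable.
    nullable = {lhs for lhs, rhs in rules if not rhs}
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if rhs and all(s in nullable for s in rhs) and lhs not in nullable:
                nullable.add(lhs)
                changed = True
    # Step 2: for every rule, add variants with nullable symbols left out.
    new_rules = set()
    for lhs, rhs in rules:
        positions = [i for i, s in enumerate(rhs) if s in nullable]
        for k in range(len(positions) + 1):
            for dropped in combinations(positions, k):
                variant = tuple(s for i, s in enumerate(rhs) if i not in dropped)
                if variant:  # newly empty right-hand sides are dropped
                    new_rules.add((lhs, variant))
    return sorted(new_rules)

# The grammar in (4):
grammar = [("vbar", ("np", "v")), ("vbar", ("np", "pp", "v")), ("np", ())]
for lhs, rhs in eliminate_epsilon(grammar):
    print(lhs, "->", " ".join(rhs))
# prints the four rules of (5): np pp v / np v / pp v / v
```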

If one applies this procedure to the HPSG grammar in Chapter 9, then the trace does not have a specific category such as NP. The trace simply has to be compatible with a non-head daughter. As the examples in (6) show, adjuncts, arguments and parts of verbal complexes can be extracted.

	- b. Oft liest er die Berichte t nicht.
	     often reads he the reports not
	     'Often, he does not read the reports.'
	- c. Lesen wird er die Berichte t müssen.
	     read will he the reports must
	     'He will have to read the reports.'

The relevant elements are combined with their head in a specific schema (Head-Argument Schema, Head-Adjunct Schema, Predicate Complex Schema). See Chapter 9 for the first two schemata; the Predicate Complex Schema is motivated in detail in Müller (2002a: Chapter 2; 2007a: Chapter 15). If one wishes to do without traces, then one needs further additional schemata for the fronting of adjuncts, of arguments and of parts of predicate complexes. The combination of a head with a trace is given in Figure 19.1 on the next page. The trace-less analysis is shown in Figure 19.2 on the facing page. In Figure 19.1, the element in the comps list of *kennen* is identified with the synsem value of the trace 4 . The lexical entry of the trace prescribes that the local value of the trace should be identical to the element in the inher|slash list.

The Non-Local Feature Principle (page 305) ensures that the slash information is present on the mother node. Since an argument position gets saturated in Head-Argument structures, the accusative object is no longer contained in the comps list of the mother node.

Figure 19.1: Introduction of information about long-distance dependencies with a trace

Figure 19.2: Introduction of information about long-distance dependencies using a unary projection

Figure 19.2 shows the parallel trace-less structure. The effect that one gets by combining a trace in argument position in Head-Argument structures is represented directly on the mother node in Figure 19.2: the local value of the accusative object was identified with the element in inher|slash on the mother node and the accusative object does not occur in the valence list any more.

The grammar presented in Chapter 9 contains another empty element: a verb trace. This would then also have to be eliminated.

	- b. Oft liest er die Berichte t nicht t.
	     often reads he the reports not
	     'Often, he does not read the reports.'
	- c. Lesen wird er die Berichte t müssen t.
	     read will he the reports must
	     'He will have to read the reports.'

Figure 19.3 on the next page shows the combination of a verb trace with an accusative object. The verb trace is specified such that the dsl value is identical to the local value of

Figure 19.3: Analysis of verb position with verb trace

the trace (see p. 299). Since dsl is a head feature, the corresponding value is also present on the mother node. Figure 19.4 shows the structure that we get by omitting the empty node. This structure may look odd at first sight since a noun phrase is projected to a

Figure 19.4: Analysis of verb position using a unary projection

verb (see page 237 for similar verb-less structures in LFG). The information that a verb is missing is contained in this structure just as in the structure with the verb trace. It is the dsl value that is decisive for the contexts in which the structure in Figure 19.4 can appear. This is identical to the value in Figure 19.3 and contains the information that a verb that requires an accusative object is missing in the structure in question. Until now, we have seen that extraction traces can be removed from the grammar by stipulating three additional rules. Similarly, three new rules are needed for the verb trace. Unfortunately, it does not stop here as the traces for extraction and head movement can also interact. For example, the NP in the tree in Figure 19.4 could be an extraction trace. Therefore, the combination of traces can result in more empty elements that then also have to be eliminated. Since we have three schemata, we will have three new empty elements if we combine the non-head daughter with an extraction trace and the head daughter with a verb trace. (8) shows these cases:


These three new traces can occur as non-head daughters in the Head-Argument Schema and thus one would require three new schemata for Head-Argument structures. Using these schemata, it then becomes possible to analyze the sentences in (8).

Six further schemata are required for the examples in (9) and (10) since the three new traces can each occur as heads in Head-Argument structures (9) and Head-Adjunct structures (10):

	- b. Oft liest er [ihn t t].
	     often reads he it
	     'He often reads it.'
	- c. Lesen wird er [ihn t t].
	     read will he it
	     'He will read it.'
	- b. Oft liest er ihn [nicht t t].
	     often reads he it not
	     'He often doesn't read it.'
	- c. Lesen wird er ihn [nicht t t].
	     read will he it not
	     'He won't read it.'

Eliminating two empty elements therefore comes at the price of twelve new rules. These rules are not particularly transparent and it is not immediately obvious why the mother node describes a linguistic object that follows general grammatical laws. For example, there are no heads in the structures following the pattern in Figure 19.4. Since there is no empirical difference between the theoretical variant with twelve additional schemata


and the variant with two empty elements, one should prefer the theory that makes fewer assumptions (Occam's Razor) and that is the theory with two empty elements.

One might think that the problem discussed here is just a problem specific to HPSG, not shared by trace-less analyses such as the LFG approach that was discussed in Section 7.5. If we take a closer look at the rule proposed by Dalrymple (2006: 84), we see that the situation in LFG grammars is entirely parallel. The brackets around the category symbols mark their optionality. The asterisk following the PP means that any number of PPs (zero or more) can occur in this position.

(11) V′ → (V) (NP) PP\*

This means that (11) is a shorthand for rules such as those in (12):

$$\begin{aligned} \text{(12)} \quad \text{a. } \mathsf{V'} &\rightarrow \mathsf{V} \\ \text{b. } \mathsf{V'} &\rightarrow \mathsf{V} \text{ NP} \\ \text{c. } \mathsf{V'} &\rightarrow \mathsf{V} \text{ NP} \text{ PP} \\ \text{d. } \ \mathsf{V'} &\rightarrow \mathsf{V} \text{ NP} \text{ PP} \text{ PP} \\ \text{e. } \ldots \\ \text{f. } \ \mathsf{V'} &\rightarrow \mathsf{NP} \\ \text{g. } \ \mathsf{V'} &\rightarrow \mathsf{NP} \text{ PP} \\ \text{h. } \ \mathsf{V'} &\rightarrow \mathsf{NP} \text{ PP} \text{ PP} \\ \text{i. } \ldots \end{aligned}$$

Since all the elements on the right-hand side of the rule are optional, the rule in (11) also stands for (13):

$$\text{(13)}\quad \text{V}' \to \epsilon$$

Thus, one does in fact have an empty element in the grammar although the empty element is not explicitly listed in the lexicon. This follows from the optionality of all elements on the right-hand side of a rule. The rule in (12f) corresponds to the schema licensed by the structure in Figure 19.4. In the licensed LFG structure, there is also no head present. Furthermore, one has a large number of rules that correspond to exactly the schemata that we get when we eliminate empty elements from an HPSG grammar. This fact is, however, hidden in the representational format of the LFG rules. The rule schemata of LFG allow for handy abbreviations of sometimes huge sets of rules (even infinite sets when using '\*').
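The hidden epsilon can be made visible by mechanically expanding the rule. The following sketch (encoding mine; the Kleene star is bounded for display purposes) expands (11) and produces, among others, the empty rule (13):

```python
from itertools import product

def expand(optionals, starred, max_star=2):
    """Expand a rule with optional symbols '(X)' and a starred symbol 'Y*'.
    The star is bounded at max_star repetitions for display."""
    rules = []
    for keep in product([True, False], repeat=len(optionals)):
        base = [s for s, k in zip(optionals, keep) if k]
        for n in range(max_star + 1):
            rules.append(base + [starred] * n)
    return rules

# Rule (11): V' -> (V) (NP) PP*
for rhs in expand(["V", "NP"], "PP"):
    print("V' ->", " ".join(rhs) if rhs else "epsilon")
# The output includes "V' -> epsilon": the hidden empty element of (13).
```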

Pollard (1988) has shown that Steedman's trace-less analysis of long-distance dependencies is not without its problems. As discussed in Section 8.5.3, a vast number of recategorization rules or lexical entries for relative pronouns are required.

# **19.3 Empty elements and semantic interpretation**

In this section, I discuss an analysis that assumes empty elements in order to allow for different readings of particular sentences. I then show how one can use so-called underspecification approaches to do without empty elements.

Sentences such as (14) are interesting since they have multiple readings (see Dowty 1979: Section 5.6) and it is not obvious how these can be derived.

(14) dass Max alle Fenster wieder öffnete
     that Max all windows again opened
     'that Max opened all the windows again'

There is a difference between a repetitive and a restitutive reading: for the repetitive reading of (14), Max has to have opened every window at least once before, whereas the restitutive reading only requires that all windows were open at some point, that is, they could have been opened by someone else.

These different readings are explained by decomposing the predicate *open*′ into at least two sub-predicates. Egg (1999) suggests the decomposition into CAUSE and *open*′ :

(15) CAUSE(x, *open*′ (y))

This means that there is a CAUSE operator that has scope over the relation *open*′ . Using this kind of decomposition, it is possible to capture the varying scope of *wieder* 'again': in one of the readings, *wieder* scopes over CAUSE and it scopes over *open*′ but below CAUSE in the other. If we assume that *öffnen* has the meaning in (15), then we still have to explain how the adverb can modify elements of a word's meaning, that is, how *wieder* 'again' can refer to *open*′ . Von Stechow (1996: 93) developed the analysis in Figure 19.5 on the next page. AgrS and AgrO are functional heads proposed for subject and object agreement in languages like Basque and have been adopted for German (see Section 4.6). Noun phrases have to be moved from the VoiceP into the specifier position of the AgrS and AgrO heads in order to receive case. T stands for Tense and corresponds to Infl in the GB theory (see Section 3.1.5 and Section 4.1.5). What is important is that there is the Voice head and the separate representation of *offen* 'open' as the head of its own phrase. In the figure, everything below Voice′ corresponds to the verb *öffnen*. By assuming a separate Voice head that contributes causative meaning, it becomes possible to derive both readings in syntax: in the reading with narrow scope of *wieder* 'again', the adverb is adjoined to the XP and has scope over open(x). In the reading with wide scope, the adverb attaches to VoiceP or some higher phrase and therefore has scope over CAUSE(BECOME(open(x))).

Jäger & Blutner (2003) point out that this analysis predicts that sentences such as (16) only have the repetitive reading, that is, the reading where *wieder* 'again' has scope over CAUSE.

(16) dass Max wieder alle Fenster öffnete
     that Max again all windows opened

This is because *wieder* precedes *alle Fenster* and therefore all heads that are inside VoiceP. Thus, *wieder* can only be combined with AgrOP or higher phrases and therefore has (too) wide scope. (16) does permit a restitutive reading, however: all windows were open at an earlier point in time and Max reestablishes this state.

Egg (1999) develops an analysis for these *wieder* cases using Constraint Language for Lambda-Structures (CLLS). CLLS is an underspecification formalism, that is, no logical

Figure 19.5: Decomposition in syntactic structures

formulae are given but instead expressions that describe logical formulae. Using expressions of this kind, it is possible to leave scope relations underspecified. I have already mentioned Minimal Recursion Semantics (MRS) (Copestake, Flickinger, Pollard & Sag 2005) in several chapters of this book. Like CLLS, MRS belongs to the class of underspecification formalisms, as do Underspecified Discourse Representation Theory (Reyle 1993, Frank & Reyle 1995) and Hole Semantics (Bos 1996, Blackburn & Bos 2005). See Baldridge & Kruijff (2002) for an underspecification analysis in Categorial Grammar and Nerbonne (1993) for an early underspecification analysis in HPSG. In the following, I will reproduce Egg's analysis in an MRS-like notation.

Before we turn to (14) and (16), let us consider the simpler sentence in (17):

(17) dass Max alle Fenster öffnete
     that Max all windows opened
     'that Max opened all the windows'

This sentence can mean that in a particular situation, it is true of all windows that Max opened them. A less readily accessible reading is the one in which Max causes all of the windows to be open. It is possible to force this reading if one rules out the first reading through contextual information (Egg 1999):

(18) Erst war nur die Hälfte der Fenster im Bus auf, aber dann öffnete Max alle Fenster.
     first was only the half of.the windows in.the bus open but then opened Max all windows
     'At first, only half of the windows in the bus were open, but then Max opened all of the windows.'

Both readings under discussion here differ with regard to the scope of the universal quantifier. The reading where Max opens all the windows himself corresponds to wide scope in (19a). The reading where some windows could have already been open corresponds to (19b):

$$\begin{aligned} \text{(19)} \quad &\text{a. } \forall x\, \mathit{window}'(x) \rightarrow \text{CAUSE}(\mathit{max}', \mathit{open}'(x))\\ &\text{b. } \text{CAUSE}(\mathit{max}', \forall x\, \mathit{window}'(x) \rightarrow \mathit{open}'(x)) \end{aligned}$$

Using underspecification, both of these readings can be represented in one dominance graph such as the one given in Figure 19.6. Each relation in Figure 19.6 has a name that

Figure 19.6: Dominance graph for *Max alle Fenster öffnete*

one can use to refer to the relation or "grasp" it. These names are referred to as *handles*. The dominance graph states that ℎ0 dominates both ℎ1 and ℎ6 and that ℎ2 dominates ℎ4, ℎ3 dominates ℎ5, and ℎ7 dominates ℎ5. The exact scopal relations are underspecified: the universal quantifier can have scope over CAUSE or CAUSE can have scope over the universal quantifier. Figures 19.7 and 19.8 show the variants of the graph with resolved scope. The underspecified graph in Figure 19.6 does not say anything about the relation between ℎ3 and ℎ6. The only thing it says is that ℎ3 somehow has to dominate ℎ5.

Figure 19.7: Dominance graph for the reading ∀ x window(x) → CAUSE(max,open(x)).

Figure 19.8: Graph for the reading CAUSE(max, ∀ x window(x) → open(x)).

In Figure 19.7 every (ℎ3) dominates CAUSE (ℎ6) and CAUSE dominates open (ℎ5). So, *every*′ dominates *open*′ indirectly. In Figure 19.8, CAUSE dominates *every*′ and *every*′ dominates *open*′ . Again the constraints of Figure 19.6 are fulfilled, but ℎ7 dominates ℎ5 only indirectly.

The fact that the quantifier dominates ℎ4 is determined by the lexical entry of the quantifier. The fact that the quantifier dominates ℎ5 does not have to be made explicit in the analysis since the quantifier binds a variable in the relation belonging to ℎ5, namely x. The dominance relation between ℎ7 and ℎ5 is always determined in the lexicon since CAUSE and *open*′ both belong to the semantic contribution of a single lexical entry.
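The way the dominance constraints carve out exactly these two readings can be checked mechanically. The following brute-force sketch (the encoding of labels, holes and constraints is mine, in an MRS-like spirit) enumerates all pluggings of holes with labels for the graph in Figure 19.6 and keeps those that satisfy the dominance constraints:

```python
from itertools import permutations

# label -> the argument holes it introduces: the quantifier's restrictor
# (h2) and scope (h3), and the single argument hole of CAUSE (h7)
CHILDREN = {"every(x)": ["h2", "h3"],
            "CAUSE(max)": ["h7"],
            "window(x)": [],
            "open(x)": []}
HOLES = ["h0", "h2", "h3", "h7"]
# the dominance constraints of Figure 19.6 as (hole, dominated label) pairs
CONSTRAINTS = [("h0", "every(x)"), ("h0", "CAUSE(max)"),
               ("h2", "window(x)"), ("h3", "open(x)"), ("h7", "open(x)")]

def dominates(plugging, hole, label):
    todo, seen = [plugging[hole]], set()
    while todo:
        lab = todo.pop()
        if lab == label:
            return True
        if lab not in seen:            # guard against cyclic pluggings
            seen.add(lab)
            todo.extend(plugging[h] for h in CHILDREN[lab])
    return False

for labels in permutations(CHILDREN):
    plugging = dict(zip(HOLES, labels))
    if all(dominates(plugging, h, l) for h, l in CONSTRAINTS):
        print(plugging)
# Exactly two pluggings survive; they correspond to Figures 19.7 and 19.8.
```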

The exact syntactic theory that one adopts for this analysis is, in the end, not of great importance. I have chosen HPSG here. As Figure 19.9 on the next page shows, the analysis of *alle Fenster öffnet* contains a simple structure with a verb and an object. This struc-

Figure 19.9: MRS analysis of *alle Fenster öffnete*

ture does not differ from the one that would be assumed for *alle Kinder kennt* 'all children know', involving the semantically simplex verb *kennen* 'to know'. The only difference comes from the meaning of the individual words involved. As shown in Section 9.1.6, relations between individual words are passed on upwards. The same happens with scopal restrictions. These are also represented in lists. hcons stands for *handle constraints*; =q in h0 =q h6 stands for equality *modulo* quantifier scope.

Egg lists the following readings for the sentence in (16) – repeated here as (20):

(20) dass Max wieder alle Fenster öffnete
     that Max again all windows opened

	- 1. Max opened every window and he had already done that at least once for each window (*again*′ (∀(CAUSE(open))); repetitive)
	- 2. Max caused every window to be open and he had done that at least once before (*again*′ (CAUSE(∀(open))); repetitive)
	- 3. At some earlier point in time, all windows were simultaneously open and Max re-established this state (CAUSE(*again*′ (∀(open))); restitutive)

These readings correspond to the dominance graph in Figure 19.10 on the following page. Figure 19.11 on the next page shows the graph for (14) – repeated here as (21):

(21) dass Max alle Fenster wieder öffnete
     that Max all windows again opened

To derive these dominance graphs from the ones without *wieder* 'again', all one has to do is add the expression h8:again(h9) and the dominance requirements that demand


Figure 19.10: Dominance graph for *Max wieder alle Fenster öffnete* 'that Max opened all the windows again'


Figure 19.11: Dominance graph for *Max alle Fenster wieder öffnete* 'that Max opened all the windows again'

that ℎ9 dominates quantifiers occurring to the right of *wieder* and that it is dominated by quantifiers to the left of *wieder*.

It is therefore unproblematic to derive the relevant readings for modification by *wieder* without empty elements for CAUSE and BECOME. The meaning of the word *öffnen* is decomposed in a similar way but the decomposed meaning is assigned to a single element, the verb. By underspecification of the scopal relations in the lexicon, the relevant readings can then be derived.

# **19.4 Evidence for empty elements**

As previously discussed, grammarians agree that both linguists and speakers notice when there is a constituent missing from a string of words. Where it can be shown that analyses with and without traces are empirically indistinguishable, one can assume empty elements. Nevertheless, the learnability argument put forward by Construction Grammarians has some validity: if one assumes that there is little or no innate linguistic knowledge, then it is not possible to motivate empty elements with data from other languages. Just because Basque shows object agreement, one cannot assume an empty head for object agreement (AgrO) in a grammar of German, as for instance von Stechow (1996) and Meinunger (2000) do. Since there is no object agreement in German, there would be no way for the child to learn that there is an AgrO head. Knowledge about AgrO must therefore be innate. Since the assumption of innate linguistic knowledge is controversial (see Chapter 13), any theory that uses cross-linguistic data to motivate the use of empty elements is on shaky ground.

Cross-linguistic considerations can only be drawn upon if there are no empirical differences between multiple alternative analyses that are compatible with and motivated by the language under consideration. In this case, one should follow Occam's Razor and choose the analysis which is compatible with analyses of other languages (see Müller 2015c and Section 23.2).

# **19.5 Transformations, lexical rules, and empty elements**

In the discussion of the passive in the framework of TAG, it became clear that lexical rules correspond to particular transformations, namely those which have some relation to a lexical item (lexically governed transformations, Dowty 1978; for the discussion of transformations and lexical rules, see Bresnan (1978) and Bresnan & Kaplan (1982)). In the respective variants of TAG, lexical rules establish a relation between a lexical item for an active tree and a lexical item for a passive tree. Both the active and passive tree can be extended by adjunction.

In theories such as Categorial Grammar, the situation is similar: since the direction in which a functor expects to find its argument is fixed for languages such as English, the lexical item stands for an entire tree. Only the attachment of adjuncts is not yet specified in lexical items. The positions in the tree where the adjuncts can occur depend on the properties of the adjuncts. In Section 8.4, we saw suggestions for treatments of languages with free constituent order. If the direction of combination is not fixed in the lexicon, then the lexical item can occur in a number of trees. If we compare lexical rules that can be applied to lexical items of this kind with transformations, we see that lexical rules create relations between different sets of trees.

In HPSG analyses, this works in a similar way: lexical rules relate lexical items with differing valence properties to each other. In HPSG grammars of English, there is normally a schema that licenses a VP containing the verb and all its complements as well

as a schema that connects the subject to the VP (Pollard & Sag 1994: 39). In the lexical items for finite verbs, it is already determined what the tree will look like in the end. As in Categorial Grammar, adjuncts in HPSG can be combined with various intermediate projections. Depending on the dominance schemata used in a particular grammar, the lexical item will determine the constituent structure in which it can occur or allow for multiple structures. In the grammar of German proposed in Chapter 9, it is possible to analyze six different sequences with a lexical item for a ditransitive verb, that is, the lexical item can – putting adjuncts aside – occur in six different structures with verb-final order. Two sequences can be analyzed with the passive lexical item, which only has two arguments. As in Categorial Grammar, sets of licensed structures are related to other sets of licensed structures. In HPSG theorizing and also in Construction Grammar, there have been attempts to replace lexical rules with other mechanisms since their "status is dubious and their interaction with other analyses is controversial" (Bouma, Malouf & Sag 2001: 19). Bouma, Malouf & Sag (2001) propose an analysis for extraction that, rather than connecting lexical items with differing valence lists, establishes a relation between a subset of a particular list in a lexical item and another list in the same lexical item. The results of the two alternative analyses are shown in (22) and (23), respectively:

$$\begin{array}{rcl} \text{(22)} & \text{a. } \begin{bmatrix} \text{comps} & \langle \text{NP[nom]}, \text{NP[acc]} \rangle\\ \text{slash} & \langle \rangle \end{bmatrix} \\\\ & \text{b. } \begin{bmatrix} \text{comps} & \langle \text{NP[nom]} \rangle\\ \text{slash} & \langle \text{NP[acc]} \rangle \end{bmatrix} \end{array}$$

In (22), (22a) is the basic entry and (22b) is related to (22a) via a lexical rule. The alternative analysis would only involve specifying the appropriate value of the arg-st feature<sup>3</sup> and the comps and slash values are then derived from the arg-st value using the relevant constraints. (23) shows two of the licensed lexical items.

$$\begin{array}{rcl} \text{(23)} & \text{a. } \begin{bmatrix} \text{arg-st} & \langle \text{NP[nom]}, \text{NP[acc]} \rangle\\ \text{comps} & \langle \text{NP[nom]}, \text{NP[acc]} \rangle\\ \text{slash} & \langle \rangle \end{bmatrix} \\\\ & \text{b. } \begin{bmatrix} \text{arg-st} & \langle \text{NP[nom]}, \text{NP[acc]} \rangle\\ \text{comps} & \langle \text{NP[nom]} \rangle\\ \text{slash} & \langle \text{NP[acc]} \rangle \end{bmatrix} \end{array}$$
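A rough sketch of this derivation (the formulation is mine; the actual proposal states it as relational constraints in the lexical entry) simply enumerates the ways in which the arg-st list can be split between comps and slash:

```python
from itertools import combinations

def lexical_items(arg_st):
    """Yield the (comps, slash) pairs derivable from one arg-st list
    by letting any subset of the arguments be extracted."""
    for n in range(len(arg_st) + 1):
        for extracted in combinations(range(len(arg_st)), n):
            comps = [a for i, a in enumerate(arg_st) if i not in extracted]
            slash = [a for i, a in enumerate(arg_st) if i in extracted]
            yield comps, slash

for comps, slash in lexical_items(["NP[nom]", "NP[acc]"]):
    print("comps:", comps, "slash:", slash)
# comps: ['NP[nom]', 'NP[acc]'] slash: []            -- (23a)
# comps: ['NP[acc]'] slash: ['NP[nom]']
# comps: ['NP[nom]'] slash: ['NP[acc]']              -- (23b)
# comps: [] slash: ['NP[nom]', 'NP[acc]']
```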

If we wanted to eliminate lexical rules entirely in this way, then we would require an additional feature for each change.<sup>4</sup> Since there are many interacting valence-changing processes, things only work out with the stipulation of a large number of auxiliary features. The consequences of assuming such analyses have been discussed in detail in Müller (2007a: Section 7.5.2.2). The problems that arise are parallel for inheritance-

<sup>3</sup> arg-st stands for *Argument Structure*. The value of arg-st is a list containing all the arguments of a head. For more on arg-st, see Section 9.1.1.

<sup>4</sup>Alternatively, one could assume a very complex relation that connects arg-st and comps. But this would then have to deliver the result of an interaction of a number of phenomena and the interaction of these phenomena would not be captured in a transparent way.

based approaches for argument structure-changing processes: they also require auxiliary features since it is not possible to model embedding and multiple changes of valence information with inheritance. See Section 10.2.

Furthermore, the claim that the status of lexical rules is dubious must be rejected: there are worked-out formalizations of lexical rules (Meurers 2001, Copestake & Briscoe 1992, Lascarides & Copestake 1999) and their interaction with other analyses is not controversial. Most HPSG implementations make use of lexical rules and the interaction of a number of rules and constraints can be easily verified by experiments with implemented fragments.

Jackendoff (1975) presents two possible conceptions of lexical rules: in one variant, the lexicon contains all words in a given language and there are just redundancy rules saying something about how certain properties of lexical entries behave with regard to properties of other lexical entries. For example, *les*- 'read-' and *lesbar* 'readable' would both have equal status in the lexicon. In the other way of thinking of lexical rules, there are a few basic lexical entries and the others are derived from these using lexical rules. The stem *les*- 'read-' would be the basic entry and *lesbar* would be derived from it. In HPSG, the second of the two variants is more often assumed. This is equivalent to the assumption of unary rules. Figure 9.9 on page 295 shows this: the verb *kennt* 'knows' is mapped by a lexical rule to a verb that selects the projection of an empty verbal head. With this conception of lexical rules, it is possible to remove lexical rules from the grammar by assuming binary-branching structures with an empty head rather than unary rules. For example, in HPSG analyses of resultative constructions such as (24), lexical rules have been proposed (Verspoor 1997; Wechsler 1997; Wechsler & Noh 2001; Müller 2002a: Chapter 5).

(24) [dass] Peter den Teich leer fischt
     that Peter the pond empty fishes
     'that Peter fishes the pond empty'

In my own analysis, a lexical rule connects a verb used intransitively to a verb that selects an accusative object and a predicate. Figure 19.12 on the following page shows the corresponding tree. If we consider what (24) means, then we notice that the fishing act causes the pond to become empty. This causation is not contained in any of the basic lexical items for the words in (24). In order for this information to be present in the semantic representation of the entire expression, it has to be added by means of a lexical rule. The lexical rule says: if a verb is used with an additional predicate and accusative object, then the entire construction has a causative meaning.

Figure 19.13 on page 589 shows how a lexical rule can be replaced by an empty head. The empty head requires the intransitive verb and additionally an adjective, an accusative object and a subject. The subject of *fischt* 'fishes' must of course be identical to the subject that is selected by the combination of *fischt* and the empty head. This is not shown in the figure. It is possible, however, to establish this identity (see Hinrichs & Nakazawa 1994a). The causative semantics is contributed by the empty head in this analysis. The trick that is being implemented here is exactly what was done in Section 19.2, just in the opposite direction: in the previous section, binary-branching structures with an empty

Figure 19.12: Analysis of the resultative construction with a lexical rule

daughter were replaced by unary-branching structures. In this section, we have replaced unary-branching structures with binary-branching structures with an empty daughter.<sup>5</sup>

We have therefore seen that certain transformations can be replaced by lexical rules and also that lexical rules can be replaced by empty heads. The following chapter deals with the question of whether phenomena like extraction, scrambling, and passive should be described with the same tool as in GB/Minimalism or with different tools as in LFG and HPSG.

<sup>5</sup>Here, we are discussing lexical rules, but this transformation trick can also be applied to other unary rules. Semanticists often use such rules for type shifting, for example a rule that turns a referential NP such as *a trickster* in (i.a) into a predicative one (i.b) (Partee 1986).

	- b. He is a trickster.

These changes can be achieved by a unary rule that is applied to an NP or with a special empty head that takes an NP as its argument. In current Minimalist approaches, empty heads are used (Ramchand 2005: 370); in Categorial Grammar and HPSG, unary-branching rules are more common (Flickinger 2008: 91–92; Müller 2009c, 2012b).

# **20 Extraction, scrambling, and passive: one or several descriptive devices?**

An anonymous reviewer suggested discussing one issue in which transformational theories differ from theories like LFG and HPSG. The reviewer claimed that Transformational Grammars use just one tool for the description of active/passive alternations, scrambling, and extraction, while theories like LFG and HPSG use different techniques for all three phenomena. If this claim were correct and if the analyses made correct predictions, the respective GB/Minimalism theories would be better than their competitors, since the general aim in science is to develop theories that need a minimal set of assumptions. I already commented on the analysis of passive in GB in Section 3.4, but I want to extend this discussion here and include a Minimalist analysis and one from Dependency Grammar.

The task of any passive analysis is to explain the difference in argument realization in examples like (1):

	- a. He beat him.
	- b. He was beaten.

In these examples about chess, the accusative object of *beat* is realized as the nominative in (1b). In addition, it can be observed that the position of the elements is different: while *him* is realized postverbally in object position in (1a), it is realized preverbally in (1b). In GB this is explained by a movement analysis. It is assumed that the object does not get case in passive constructions and hence has to move into the subject position where case is assigned by the finite verb. This analysis is also assumed in Minimalist work as in David Adger's textbook (2003), for instance. Figure 20.1 on the following page shows his analysis of (2):

(2) Jason was killed.

TP stands for Tense Phrase and corresponds to the IP that was discussed in Chapter 3. PassP is a functional head for passives. *v*P is a special category for the analysis of verb phrases that was originally introduced for the analysis of ditransitives (Larson 1988) and VP is the normal VP that consists of verb and object. In Adger's analysis, the verb *kill* moves from the verb position in VP to the head position of *v*, the passive auxiliary *be* moves from the head position of PassP to the head position of the Tense Phrase. Features like Infl are 'checked' in combination with such movements. The exact implementation of these checking and valuing operations does not matter here. What is important is that *Jason* moves from the object position to a position that was formerly known as the

Figure 20.1: Adger's Minimalist movement-based analysis of the passive (p. 231)

specifier position of T (see Footnote 28 on page 161 on the notion of specifier). All these analyses assume that the participle cannot assign accusative to its object and that the object has to move to another position to get case or check features. How exactly one can formally represent the fact that the participle cannot assign case is hardly ever made explicit in the GB literature. The following is a list of statements that can be found in the literature:

	- b. das Objekt des Aktivsatzes wird zum Subjekt des Passivsatzes, weil die passivische Verbform keinen Akkusativ-Kasus regieren kann (Akk-Kasus-Absorption). (Lohnstein 2014: 172)
		'the object of the active sentence becomes the subject of the passive sentence, because the passive verb form cannot govern accusative case (acc case absorption)'

In addition, it is sometimes said that the external theta-role is absorbed by the verb morphology (Jaeggli 1986; Haegeman 1994: 183). Now, what would it entail if we made this explicit? There is some lexical item for verbs like *beat*. The active form has the ability to assign accusative to its object, but the passive form does not. Since this is a property that is shared by all transitive verbs (by definition of the term transitive verb), this is some regularity that has to be captured. One way to capture this is the assumption of a special passive morpheme that suppresses the agent and changes something in the case specification of the stem it attaches to. How this works in detail was never made explicit. Let us compare this morpheme-based analysis with lexical rule-based analyses: as was explained in Section 19.5, empty heads can be used instead of lexical rules in those cases in which the phonological form of the input and the output do not differ. So, for example, lexical rules that license additional arguments, as in resultative constructions, can be replaced by an empty head. However, as was explained in Section 9.2, lexical rules are also used to model morphology. This is also true for Construction Grammar

(see Gert Booij's work on Construction Morphology (2010), which is in many ways similar to Riehemann's work in HPSG (1993, 1998)). In the case of the passive lexical rule, the participle morphology is combined with the stem and the subject is suppressed in the corresponding valence list. This is exactly what is described in the GB/MP literature. The respective lexical rule for the analysis of *ge-lieb-t* 'loved' is depicted in Figure 20.2 to the left. The morpheme-based analysis is shown to the right. To keep things simple, I

Figure 20.2: Lexical rule-based/constructionist vs. morpheme-based analysis

assume a flat analysis, but those who insist on binary-branching structures would have to come up with a way of deciding whether the *ge*- or the -*t* is combined with the stem first and in which way selection and percolation of features take place. Independent of how morphology is done, the fact that the inflected form (the top node in both figures) has different properties than the verb stem has to be represented somehow. In the morpheme-based world, the morpheme is responsible for suppressing the agent and changing the case assignment properties; in the lexical rule/construction world, this is done by the respective lexical rule. There is no difference in terms of the tools and stipulations needed.

The situation in Minimalist theories is a little bit different. For instance, Adger (2003: 229, 231) writes the following:

Passives are akin to unaccusatives, in that they do not assign accusative case to their object, and they do not appear to have a thematic subject. […] Moreover, the idea that the function of this auxiliary is to select an unaccusative little *v*P simultaneously explains the lack of accusative case and the lack of a thematic subject. (Adger 2003: 229, 231)

So this is an explicit statement. What was a relation between a stem and a passive participle form in GB analyses is now a verb stem that is combined with two different versions of little *v*. Which *v* is chosen is determined by the governing head, a functional Perf head or a Pass head. This can be depicted as in Figure 20.3 on the following page. When *kill* is used in the perfect or the passive, it is spelled out as *killed*. If it is used in the active with a 3rd person singular subject, it is spelled out as *kills*. This can be compared with a lexical analysis, for instance the one assumed in HPSG. The analysis is shown in Figure 20.4 on the next page. The left figure shows a lexical item that is licensed by a lexical rule that is applied to the stem *kill*-. The stem has two elements in its argument structure list and for the active forms the complete argument structure list


Figure 20.4: Lexical rule-based analysis of the perfect and the passive in HPSG

is shared between the licensed lexical item and the stem. The first element of the arg-st list is mapped to spr and the other elements to comps (in English). Passive is depicted in the right figure: the first element of the arg-st list with structural case is suppressed, and since the element that was the second element in the arg-st list of the stem ( 2 ) is now the first element, this item is mapped to spr. See Section 9.2 for passive in HPSG and Section 9.1.1 for comments on arg-st and the differences between German and English.
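The mapping just described can also be stated procedurally. The following is a minimal sketch under invented toy representations (arg-st as a plain list of dictionaries); it only illustrates the logic, not the actual feature geometry:

```python
def map_active(arg_st):
    """English mapping: the first element of ARG-ST goes to SPR,
    the remaining elements to COMPS."""
    return {"spr": arg_st[:1], "comps": arg_st[1:]}

def map_passive(arg_st):
    """Passive: suppress the first element with structural case; the
    element that was second is now first and is mapped to SPR."""
    i = next(i for i, a in enumerate(arg_st) if a["case"] == "structural")
    rest = arg_st[:i] + arg_st[i + 1:]
    return {"spr": rest[:1], "comps": rest[1:]}

kill = [{"role": "agent", "case": "structural"},
        {"role": "patient", "case": "structural"}]

print(map_active(kill))   # agent -> SPR, patient -> COMPS ('kills')
print(map_passive(kill))  # patient -> SPR, empty COMPS ('was killed')
```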

The discussion of Figures 20.3 and 20.4 is a further illustration of a point made in Section 19.5: lexical rules can be replaced by empty heads and vice versa. While HPSG says that there are stems that are related to inflected forms and that the arguments are realized in a way that corresponds to the inflection, Minimalist theories assume two variants of little *v* that differ in their selection of arguments. Now, the question is: are there empirical differences between the two approaches? I think there are differences if one considers the question of language acquisition. What children can acquire from data is that there are various inflected forms and that they are related somehow. What remains questionable is whether they really would be able to detect empty little *v*s. One could claim of course that children operate with chunks of structures such as the ones in Figure 20.3. But then a verb would be just a chunk consisting of little *v* and V and having some open slots. This would be indistinguishable from what the HPSG analysis assumes.

As far as the "lexical rules as additional tool" aspect is concerned, the discussion is closed, but note that the standard GB/Minimalism analyses differ in another way from LFG and HPSG analyses, since they assume that passive has something to do with movement,

that is, they assume that the same mechanisms are used that are used for nonlocal dependencies.<sup>1</sup> This works for languages like English in which the object has to be realized in postverbal position in the active and in preverbal position in the passive, but it fails for languages like German in which the order of constituents is freer. Lenerz (1977: Section 4.4.3) discussed the examples in (44) on page 112, which are repeated here as (4) for convenience:

	- b. weil dem Jungen der Ball geschenkt wurde
		because the.dat boy the.nom ball given was
	- c. weil der Ball dem Jungen geschenkt wurde
		because the.nom ball the.dat boy given was
		'because the ball was given to the boy'

While both orders in (4b) and (4c) are possible, the one with dative–nominative order in (4b) is the unmarked one. There is a strong linearization preference in German demanding that animate NPs be serialized before inanimate ones (Hoberg 1981: 46). This linearization rule is unaffected by passivization. Theories that assume that passive is movement either have to assume that the passive of (4a) is (4c) and that (4b) is derived from (4c) by a further reordering operation (which would be implausible, since usually one assumes that more marked constructions require more transformations), or they would have to come up with other explanations for the fact that the subject of the passive sentence has the same position as the object in active sentences. As was already explained in Section 3.4, one such explanation is to assume an empty expletive subject that is placed in the position where nominative is assigned and to somehow connect this expletive element to the subject in object position. While this somehow works, it should be clear that the price for rescuing a movement-based analysis of passive is rather high: one has to assume an empty expletive element, that is, something that has neither a form nor a meaning. The existence of such an object could not be inferred from the input unless it is assumed that the structures in which it occurs are given. Thus, a rather rich UG would have to be assumed.

The question one needs to ask here is: why does the movement-based analysis have these problems and why does the valence-based analysis not have them? The cause of the problem is that the analysis of the passive mixes two things: the fact that SVO languages like English encode subjecthood positionally, and the fact that the subject is suppressed in passives. If these two things are separated, the problem disappears. The fact that the object of the active sentence in (1a) is realized as the subject in (1b) is explained

<sup>1</sup> There is another option in Minimalist theories. Since Agree can check features nonlocally, T can assign nominative to an embedded element. So, in principle the object may get nominative in the VP without moving to T. However, Adger (2003: 368) assumes that German has a strong EPP feature on T, so that the underlying object has to move to the specifier of T. This is basically the old GB analysis of passive in German with all its conceptual problems and disadvantages.

by the assumption that the first NP on the argument structure list with structural case is realized as subject and mapped to the respective valence feature: spr in English. Such mappings can be language specific (see Section 9.1.1 and Müller (2023b) where I discuss Icelandic, which is an SVO language with subjects with lexical case).

In what follows, I discuss another set of examples that are sometimes seen as evidence for a movement-based analysis. The examples in (5) are instances of the so-called remote passive (Höhle 1978: 175–176).<sup>2</sup>

	- b. weil der Wagen oft zu reparieren versucht wurde
		because the car.nom often to repair tried was
		'because many attempts were made to repair the car'

What is interesting about these examples is that the subject is the underlying object of a deeply embedded verb. This seems to suggest that the object is extracted out of the verb phrase. So the analysis of (5b) would be (6):

(6) weil [IP der Wagen [VP oft [VP [VP [VP \_ zu reparieren] versucht] wurde]]]
	because the car.nom often to repair tried was

While this is a straightforward explanation of the fact that (5b) is grammatical, another explanation is possible as well. In the HPSG analysis of German (and Dutch) it is assumed that verbs like those in (5b) form a verbal complex, that is, *zu reparieren versucht wurde* 'to repair tried was' forms one unit. When two or more verbs form a complex, the highest verb attracts the arguments from the verb it embeds (Hinrichs & Nakazawa 1989b, 1994a; Bouma & van Noord 1998). A verb like *versuchen* 'to try' selects a subject, an infinitive with *zu* 'to' and all complements that are selected by this infinitive. In the analysis of (7), *versuchen* 'to try' selects for its subject, the object of *reparieren* 'to repair' and for the verb *zu reparieren* 'to repair'.

(7) weil er den Wagen zu reparieren versuchen will
	because he.nom the.acc car to repair try wants
	'because he wants to try to repair the car'

Now if the passive lexical rule applies to *versuch*-, it suppresses the first argument of *versuch*- with structural case, which is the subject of *versuch*-. The next argument of *versuch*- is the object of *zu reparieren*. Since this element is the first NP with structural case, it gets nominative as in (5b). So, this shows that there is an analysis of the remote passive that does not rely on movement. Since movement-based analyses were shown to be problematic and since there are no data that cannot be explained without movement, analyses without movement have to be preferred.
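The interplay of argument attraction and the passive lexical rule can be emulated in a few lines. This is a deliberately crude sketch with invented names and list-based representations; it only mirrors the order of operations described in the text:

```python
def attract(governing_args, embedded):
    """Verbal complex formation: the higher verb takes over the non-verbal
    arguments of the verb it embeds, plus the embedded verb itself."""
    return governing_args + embedded["arg_st"] + [embedded["form"]]

def passivize(arg_st):
    """Suppress the first NP with structural case; the next NP with
    structural case is then realized as nominative."""
    nps = [a for a in arg_st if isinstance(a, dict)]
    suppressed = next(a for a in nps if a["case"] == "structural")
    rest = [a for a in arg_st if a is not suppressed]
    promoted = next(a for a in rest
                    if isinstance(a, dict) and a["case"] == "structural")
    promoted["realized_as"] = "nom"
    return rest

reparieren = {"form": "zu reparieren",
              "arg_st": [{"gf": "object", "case": "structural"}]}
versuch_args = [{"gf": "subject", "case": "structural"}]

print(passivize(attract(versuch_args, reparieren)))
# the object of 'zu reparieren' surfaces as nominative, as in (5b)
```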

<sup>2</sup> See Müller (2002a: Section 3.1.4.1) and Wurmbrand (2003b) for corpus examples.

<sup>3</sup>Oppenrieder (1991: 212).

This leaves us with movement-based accounts of local reordering (scrambling). The reviewer suggested that scrambling, passive, and nonlocal extraction may be analyzed with the same mechanism. It was long thought that scope facts made the assumption of movement-based analyses of scrambling necessary, but it was pointed out by Kiss (2001: 146) and Fanselow (2001: Section 2.6) that the reverse is true: movement-based accounts of scrambling make wrong predictions with regard to available quantifier scopings. I discussed the respective examples in Section 3.5 already and will not repeat the discussion here. The conclusion that has to be drawn from this is that passive, scrambling, and long distance extraction are three different phenomena that should be treated differently. The solution for the analysis of the passive that is adopted in HPSG is based on an analysis by Haider (1986a), who worked within the GB framework. The "scrambling-as-base-generation" approach to local reordering that was used in HPSG right from the beginning (Gunji 1986) is also adopted by some practitioners of GB/Minimalism, e.g., Fanselow (2001).

Having discussed the analyses in GB/Minimalism, I now turn to Dependency Grammar. Groß & Osborne (2009) suggest that *w*-fronting, topicalization, scrambling, extraposition, splitting, and also the remote passive should be analyzed by what they call *rising*. The concept was already explained in Section 11.5. Figures 20.5 and 20.6 show examples for the fronting and the scrambling of an object. Groß and Osborne

Figure 20.5: Analysis of *Die Idee wird jeder verstehen.* 'Everybody will understand the idea.' involving rising

assume that the object depends on the main verb in sentences with auxiliary verbs, while the subject depends on the auxiliary. Therefore, the object *die Idee* 'the idea' and the object *sich* 'himself' have to rise to the next higher verb in order to keep the structures projective. Figure 20.7 on the following page shows the analysis of the remote passive. The object of *zu reparieren* 'to repair' rises to the auxiliary *wurde* 'was'.

Groß and Osborne use the same mechanism for all these phenomena, but it should be clear that there have to be differences in the exact implementation. Groß and Osborne say that English does not have scrambling, while German does. If this is to be captured, there must be a way to distinguish the two phenomena; otherwise one would predict that English has scrambling as well, since both German and English

Figure 20.6: Analysis of *Gestern hat sich der Spieler verletzt.* 'Yesterday, the player injured himself.' involving rising of the object of the main verb *verletzt* 'injured'

Figure 20.7: Analysis of the remote passive *dass der Wagen zu reparieren versucht wurde* 'that it was tried to repair the car' involving rising

allow long distance fronting. Groß & Osborne (2009: 58) assume that object nouns that rise must take the nominative. But if the kind of rising that they assume for remote passives is identical to the one that they assume for scrambling, they would predict that *den Wagen* gets nominative in (8) as well:

(8) dass den Wagen niemand repariert hat
	that the.acc car nobody.nom repaired has
	'that nobody repaired the car'

Since *den Wagen* 'the car' and *repariert* 'repaired' are not adjacent, *den Wagen* has to rise to the next higher head in order to allow for a projective realization of elements. So in order to assign case properly, one has to take into account the arguments that are governed by the head to which a certain element rises. Since the auxiliary *hat* 'has'

already governs a nominative, the NP *den Wagen* has to be realized in the accusative. An analysis that assumes that both the accusative and nominative depend on *hat* 'has' in (8) is basically the verbal complex analysis assumed in HPSG and some GB variants.
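The generalization about case that this argument rests on can be stated as a small rule over the arguments already governed by the head that an NP rises to. The sketch below uses invented representations and is not Groß and Osborne's formalism:

```python
def case_after_rising(governed_args, risen_np):
    """If the head that the NP rises to already governs a nominative,
    the risen NP is realized as accusative; in the remote passive the
    auxiliary governs no nominative of its own, so the risen NP is
    realized as nominative."""
    has_nom = any(a["case"] == "nom" for a in governed_args)
    risen_np["case"] = "acc" if has_nom else "nom"
    return risen_np

# (8): 'hat' already governs 'niemand' in the nominative
print(case_after_rising([{"form": "niemand", "case": "nom"}],
                        {"form": "den Wagen"}))      # -> accusative
# remote passive: 'wurde' governs no nominative of its own
print(case_after_rising([], {"form": "der Wagen"}))  # -> nominative
```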

Note, however, that this does not extend to nonlocal dependencies. Case is assigned locally by verbs or verbal complexes, but not to elements that come from far away. The long distance extraction of NPs is more common in southern variants of German and there are only a few verbs that do not take a nominative argument themselves. The examples below involve *dünken* 'to think', which governs an accusative and a sentential object, and *scheinen* 'to seem', which governs a dative and a sentential object. If (9a) is analyzed with *den Wagen* rising to *dünkt*, one might expect that *den Wagen* 'the car' gets nominative since there is no other element in the nominative. However, (9b) is entirely out.

	- b. \* Der Wagen dünkt mich, dass er repariert.
		the.nom car thinks me.acc that he.nom repairs

Similarly there is no agreement between the fronted element and the verb to which it attaches:



This shows that scrambling/remote passive and extraction should not be dealt with by the same mechanism, or, if they are dealt with by the same mechanism, one has to make sure that there are specialized variants of the mechanism that take the differences into account. I think what Groß and Osborne did is simply recode the attachment relations of phrase structure grammars. *die Idee* 'the idea' has some relation to *wird jeder verstehen* 'will everybody understand' in Figure 20.5, as it does in GB, LFG, GPSG, HPSG, and other similar frameworks. In HPSG, *die Idee* 'the idea' is the filler in a filler-head configuration. The remote passive and local reorderings of arguments of auxiliaries, modal verbs, and other verbs that behave similarly are explained by verbal complex formation where all non-verbal arguments depend on the highest verb (Hinrichs & Nakazawa 1994a).


Concluding this chapter, it can be said that local reorderings and long-distance dependencies are two different things that should be described with different tools (or there should be further constraints that differ for the respective phenomena when the same tool is used). Similarly, movement-based analyses of the passive are problematic since passive does not necessarily imply reordering.

# **21 Phrasal vs. lexical analyses**

coauthored with Stephen Wechsler

This section deals with a rather crucial aspect when it comes to the comparison of the theories described in this book: valence and the question of whether sentence structure, or rather syntactic structure in general, is determined by lexical information or whether syntactic structures have an independent existence (and meaning) and lexical items are just inserted into them. Roughly speaking, frameworks like GB/Minimalism, LFG, CG, HPSG, and DG are lexical, while GPSG and Construction Grammar (Goldberg 1995, 2003a, Tomasello 2003, 2006b, Croft 2001) are phrasal approaches. This categorization reflects tendencies, but there are non-lexical approaches in Minimalism (Borer's exoskeletal approach, 2003) and LFG (Alsina 1996; Asudeh et al. 2008, 2013) and there are lexical approaches in Construction Grammar (Sign-Based Construction Grammar, see Section 10.6.2). The phrasal approach is widespread also in frameworks like Cognitive Grammar (Dąbrowska 2001; Langacker 2009: 169) and Simpler Syntax (Culicover & Jackendoff 2005, Jackendoff 2008) that could not be discussed in this book.

The question is whether the meaning of an utterance like (1a) is contributed by the verb *give* alone, with the structure needed for the NPs occurring together with the verb not contributing any meaning, or whether there is a phrasal pattern [X Verb Y Z] that contributes some "ditransitive meaning", whatever this may be.<sup>1</sup>

	- b. Peter fishes the pond empty.

Similarly, there is the question of how the constituents in (1b) are licensed. This sentence is interesting since it has a resultative meaning that is not part of the meaning of the verb *fish*: Peter's fishing causes the pond to become empty. Nor is this additional meaning part of the meaning of any other item in the sentence. On the lexical account, there is a lexical rule that licenses a lexical item that selects for *Peter*, *the pond*, and *empty*. This lexical item also contributes the resultative meaning. On the phrasal approach, it is

<sup>1</sup>Note that the prototypical meaning is a transfer of possession in which Y receives Z from X, but the reverse holds in (i.b):

	- b. Er stiehlt ihr den Ball.
		he.nom steals her.dat the.acc ball
		'He steals the ball from her.'

assumed that there is a pattern [Subj V Obj Obl]. This pattern contributes the resultative meaning, while the verb that is inserted into this pattern just contributes its prototypical meaning, e.g., the meaning that *fish* would have in an intransitive construction. I call such phrasal approaches *plugging approaches*, since lexical items are plugged into ready-made structures that do most of the work.

In what follows I will examine these proposals in more detail and argue that the lexical approaches to valence are the correct ones. The discussion will be based on earlier work of mine (Müller 2006, 2007b, 2010b) and work that I did together with Steve Wechsler (Müller & Wechsler 2014a,b). Some of the sections in Müller & Wechsler (2014a) started out as translations of Müller (2013a), but the material was reorganized and refocused due to intensive discussion with Steve Wechsler. So rather than using a translation of Section 11.11 of Müller (2013a), I use parts of Müller & Wechsler (2014a) here and add some subsections that had to be left out of the article due to space restrictions (Subsections 21.3.6 and 21.7.3). Because there have been misunderstandings in the past (e.g., Boas (2014), see Müller & Wechsler (2014b)), a disclaimer is necessary here: this section is not an argument against Construction Grammar. As was mentioned above, Sign-Based Construction Grammar is a lexical variant of Construction Grammar and hence compatible with what I believe to be correct. Nor is this section an argument against phrasal constructions in general, since there are phenomena that seem to be best captured with phrasal constructions. These are discussed in detail in Subsection 21.10. What I will argue against in the following subsections is a special kind of phrasal construction, namely phrasal argument structure constructions (phrasal ASCs). I believe that all phenomena that have to do with valence and valence alternations should be treated lexically.

# **21.1 Some putative advantages of phrasal models**

In this section we examine certain purported advantages of phrasal versions of Construction Grammar over lexical rules. Then in the following section, we will turn to positive arguments for lexical rules.

# **21.1.1 Usage-based theories**

For many practitioners of Construction Grammar, their approach to syntax is deeply rooted in the ontological strictures of *usage-based* theories of language (Langacker 1987, Goldberg 1995, Croft 2001, Tomasello 2003). Usage-based theorists oppose the notion of "linguistic rules conceived of as algebraic procedures for combining symbols that do not themselves contribute to meaning" (Tomasello 2003: 99). All linguistic entities are symbolic of things in the realm of denotations; "all have communicative significance because they all derive directly from language use" (*ibid*). Although the formatives of language may be rather abstract, they can never be divorced from their functional origin as a tool of communication. The usage-based view of constructions is summed up well in the following quote:

The most important point is that constructions are nothing more or less than patterns of usage, which may therefore become relatively abstract if these patterns

include many different kinds of specific linguistic symbols. But never are they empty rules devoid of semantic content or communicative function. (Tomasello 2003: 100)

Thus constructions are said to differ from grammatical rules in two ways: they must carry meaning; and they reflect the actual "patterns of usage" fairly directly.

Consider first the constraint that every element of the grammar must carry meaning, which we call the *semiotic dictum*. Do lexical or phrasal theories hew most closely to this dictum? Categorial Grammar, the paradigm of a lexical theory (see Chapter 8), is a strong contender: it consists of meaningful words, with only a few very general combinatorial rules such as X/Y ∗ Y = X. Given the rule-to-rule assumption, those combinatorial rules specify the meaning of the whole as a function of the parts. Whether such a rule counts as meaningful in itself in Tomasello's sense is not clear.
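For concreteness, such a combinatorial rule can be written down in a couple of lines. This sketch encodes a category as a (result, slash, argument) triple; the encoding is invented for illustration and is not a serious Categorial Grammar implementation:

```python
def forward_apply(functor, arg):
    """X/Y combined with a following Y yields X; by the rule-to-rule
    assumption, the meaning of the result is the functor's meaning
    applied to the argument's meaning."""
    result, slash, wanted = functor["cat"]
    assert slash == "/" and wanted == arg["cat"], "category mismatch"
    return {"cat": result, "sem": functor["sem"](arg["sem"])}

likes = {"cat": ("S\\NP", "/", "NP"),
         "sem": lambda y: (lambda x: f"like({x},{y})")}
vp = forward_apply(likes, {"cat": "NP", "sem": "bagels"})
print(vp["sem"]("kim"))  # -> like(kim,bagels)
```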

What does seem clear is that the combinatorial rules of Construction Grammar, such as Goldberg's Correspondence Principle for combining a verb with a construction (1995: 50), have the same status as those combinatorial rules:

(2) The Correspondence Principle: each participant that is lexically profiled and expressed must be fused with a profiled argument role of the construction. If a verb has three profiled participant roles, then one of them may be fused with a nonprofiled argument role of a construction. (Goldberg 1995: 50)

Both verbs and constructions are specified for participant roles, some of which are *profiled*. Argument profiling for verbs is "lexically determined and highly conventionalized" (Goldberg 1995: 46). Profiled argument roles of a construction are mapped to direct grammatical functions, i. e., SUBJ, OBJ, or OBJ2. By the Correspondence Principle the lexically profiled argument roles must be direct, unless there are three of them, in which case one may be indirect.<sup>2</sup> With respect to the semiotic dictum, the Correspondence Principle has the same status as the Categorial Grammar combinatorial rules: a meaningless algebraic rule that specifies the way to combine meaningful items.

Turning now to the lexicalist syntax we favor, some elements abide by the semiotic dictum while others do not. Phrase structure rules for intransitive and transitive VPs (or the respective HPSG ID schema) do not. Lexical valence structures clearly carry meaning since they are associated with particular verbs. In an English ditransitive, the first object expresses the role of "intended recipient" of the referent of the second object. Hence *He carved her a toy* entails that he carved a toy with the intention that she receive it. So the lexical rule that adds a benefactive recipient argument to a verb adds meaning. Alternatively, a phrasal ditransitive construction might contribute that "recipient" meaning.<sup>3</sup> Which structures have meaning is an empirical question for us.

In Construction Grammar, however, meaning is assumed for all constructions *a priori*. But while the ditransitive construction plausibly contributes meaning, no truth-conditional

<sup>2</sup>We assume that the second sentence of (2) provides for exceptions to the first sentence.

<sup>3</sup> In Section 21.2.1 we argue that the recipient should be added in the lexical argument structure, not through a phrasal construction. See Wechsler (1991: 111–113; 1995: 88–89) for an analysis of English ditransitives with elements of both constructional and lexical approaches. It is based on Kiparsky's notion of a *thematically restricted positional linker* (1987, 1988).


meaning has yet been discovered for either the intransitive or bivalent transitive constructions. Clearly the constructionist's evidence for the meaningfulness of *certain* constructions such as the ditransitive does not constitute evidence that *all* phrasal constructions have meaning. So the lexical and phrasal approaches seem to come out the same, as far as the semiotic dictum is concerned.

Now consider the second usage-based dictum, that the elements of the grammar directly reflect patterns of usage, which we call *the transparency dictum*. The Construction Grammar literature often presents its constructions informally in ways that suggest that they represent surface constituent order patterns: the transitive construction is "[X VERB Y]" (Tomasello) or "[Subj V Obj]" (Goldberg 1995, 2006)<sup>4</sup>; the passive construction is "X *was* VERB*ed by* Y" (Tomasello 2003: 100) or "Subj aux Vpp (PPby)" (Goldberg 2006: 5). But a theory in which constructions consist of surface patterns was considered in detail and rejected by Müller (2006: Section 2), and does not accurately reflect Goldberg's actual theory.<sup>5</sup> The more detailed discussions present *argument structure constructions*, which are more abstract and rather like the lexicalists' grammatical elements (or perhaps an LFG f-structure): the transitive construction resembles a transitive valence structure (minus the verb itself); the passive construction resembles the passive lexical rule.

With respect to fulfilling the desiderata of usage-based theorists, we do not find any significant difference between the non-lexical and lexical approaches.

### **21.1.2 Coercion**

Researchers working with plugging proposals usually take coercion as an indication of the usefulness of phrasal constructions. For instance, Anatol Stefanowitsch (lecture in the lecture series *Algorithmen und Muster – Strukturen in der Sprache*, 2009) discussed the example in (3):

(3) Das Tor zur Welt Hrnglb öffnete sich ohne Vorwarnung und verschlang [sie] … die Welt Hrnglb wird von Magiern erschaffen, die Träume zu Realität formen können, aber nicht in der Lage sind zu träumen. Haltet aus, Freunde. Und ihr da draußen, bitte träumt ihnen ein Tor.<sup>6</sup>
	'The gate to the world Hrnglb opened without warning and swallowed them. The world Hrnglb is created by magicians that can form reality from dreams but cannot dream themselves. Hold out, friends! And you out there, please, dream a gate for them.'

The crucial part is *bitte träumt ihnen ein Tor* 'Dream a gate for them'. In this fantasy context the word *träumen*, which is intransitive, is forced into the ditransitive construction and therefore gets a certain meaning. This forcing of a verb corresponds to overwriting or rather extending properties of the verb by the phrasal construction.

<sup>4</sup>Goldberg et al. (2004: 300) report about a language acquisition experiment that involves an SOV pattern. The SOV order is mentioned explicitly and seen as part of the construction.

<sup>5</sup> This applies to argument structure constructions only. In some of her papers Goldberg assumes that very specific phrase structural configurations are part of the constructions. For instance in her paper on complex predicates in Persian (Goldberg 2003b) she assigns V<sup>0</sup> and V categories. See Müller (2010b: Section 4.9) for a critique of that analysis.

<sup>6</sup> http://www.elbenwaldforum.de/showflat.php?Cat=&Board=Tolkiens\_Werke&Number=1457418&page= 3&view=collapsed&sb=5&o=&fpart=16, 2010-02-27.


In cases in which the plugging proposals assume that information is overwritten or extended, lexical approaches assume mediating lexical rules. Briscoe & Copestake (1999: Section 4) have worked out a lexical approach in detail.<sup>7</sup> They discuss the ditransitive sentences in (4), which either correspond to the prototypical ditransitive construction (4a) or deviate from it in various ways.

	- b. Joe painted Sally a picture.
	- c. Mary promised Joe a new car.
	- d. He tipped Bill two pounds.
	- e. The medicine brought him relief.
	- f. The music lent the party a festive air.
	- g. Jo gave Bob a punch.
	- h. He blew his wife a kiss.
	- i. She smiled herself an upgrade.<sup>8</sup>

For the non-canonical examples they assume lexical rules that relate transitive (*paint*) and intransitive (*smile*) verbs to ditransitive ones and contribute the respective semantic information or the respective metaphorical extension. The example in (4i) is rather similar to the *träumen* example discussed above and is also analyzed with a lexical rule (page 509). Briscoe and Copestake note that this lexical rule is much more restricted in its productivity than the other lexical rules they suggest. They take this as motivation for developing a representational format in which lexical items (including those that are derived by lexical rules) are associated with probabilities, so that differences in productivity of various patterns can be captured.

Looking narrowly at such cases, it is hard to see any rational grounds for choosing between the phrasal analysis and the lexical rule. But if we broaden our view, the lexical rule approach can be seen to have a much wider application. Coercion is a very general pragmatic process, occurring in many contexts where no construction seems to be responsible (Nunberg 1995). Nunberg cites many cases such as the restaurant waiter asking *Who is the ham sandwich?* (Nunberg 1995: 115). Copestake & Briscoe (1992: 116) discuss the conversion of terms for animals to mass nouns (see also Copestake & Briscoe (1995: 36–43)). Example (5) is about a substance, not about a cute bunny.

(5) After several lorries had run over the body, there was rabbit splattered all over the road.

The authors suggest a lexical rule that maps a count noun onto a mass noun. This analysis is also assumed by Fillmore (1999: 114–115). Such coercion can occur without any syntactic context: one can answer the question *What's that stuff on the road?* or *What are you eating?* with the one-word utterance *Rabbit.* Some coercion happens to affect

<sup>7</sup>Kay (2005), working in the framework of CxG, also suggests unary constructions.

<sup>8</sup>Douglas Adams. 1979. *The Hitchhiker*'*s Guide to the Galaxy*, Harmony Books. Quoted from Goldberg (2003a: 220).

the complement structure of a verb, but this is simply a special case of a more general phenomenon that has been analyzed by rules of systematic polysemy.
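Such a rule of systematic polysemy is easy to state explicitly. The sketch below uses invented feature names but follows the spirit of the count-to-mass ('grinding') rule discussed by Copestake and Briscoe:

```python
def grinding(noun):
    """Map a count noun denoting an animal onto a mass noun denoting
    the corresponding substance (cf. the 'rabbit' example in (5))."""
    assert noun["count"] and noun["animal"], "applies to animal count nouns"
    return {"form": noun["form"], "count": False, "animal": False,
            "sem": f"substance_of({noun['sem']})"}

rabbit = {"form": "rabbit", "count": True, "animal": True, "sem": "rabbit"}
print(grinding(rabbit))
# the mass reading in 'there was rabbit splattered all over the road'
```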

# **21.1.3 Aspect as a clause level phenomenon**

Alsina (1996), working in the framework of LFG, argues for a phrasal analysis of resultative constructions based on the aspectual properties of sentences, since aspect is normally viewed as a property that is determined by the sentence syntax. Intransitive verbs such as *bark* refer to activities; a resultative construction with the same verb, however, stands for an accomplishment (an extended change of state). Alsina supports this with the following data:

	- a. The dog barked in five minutes.
	- b. The dog barked the neighbors awake in five minutes.

The latter sentence means that the *barking* event was completed after five minutes. A reading referring to the time span of the event is not available for (6a). If (6a) is grammatical at all, then a claim is being made about the time frame in which the event began.

If we now consider examples such as (7c), however, we see that Alsina's argumentation is not cogent, since the resultative meaning is already present at the word level in nominalizations. As the examples in (7) show, this contrast can be observed in nominal constructions and is therefore independent of the sentence syntax:

	- b. # weil sie in fünf Jahren fischten
		because they in five years fished
	- c. das Leerfischen der Nordsee in fünf Jahren
		the empty.fishing of.the North.Sea in five years
	- d. # das Fischen in fünf Jahren
		the fishing in five years

In a lexical approach there is a verb stem selecting for two NPs and a resultative predicate. This stem has the appropriate meaning and can be inflected or undergo derivation and subsequent inflection. In both cases we get words that contain the resultative semantics and hence are compatible with the respective adverbials.

# **21.1.4 Simplicity and polysemy**

Much of the intuitive appeal of the plugging approach stems from its apparent simplicity relative to the use of lexical rules. But the claim to greater simplicity for Construction Grammar is based on misunderstandings of both lexical rules and Construction Grammar (specifically of Goldberg's (1995, 2006) version). It draws the distinction in the

wrong place and misses the real differences between these approaches. This argument from simplicity is often repeated and so it is important to understand why it is incorrect.

Tomasello (2003) presents the argument as follows. Discussing first the lexical rules approach, Tomasello (2003: 160) writes that

One implication of this view is that a verb must have listed in the lexicon a different meaning for virtually every different construction in which it participates […]. For example, while the prototypical meaning of *cough* involves only one participant, the cougher, we may say such things as *He coughed her his cold*, in which there are three core participants. In the lexical rules approach, in order to produce this utterance the child's lexicon must have as an entry a ditransitive meaning for the verb *cough*. (Tomasello 2003: 160)

Tomasello (2003: 160) then contrasts a Construction Grammar approach, citing Fillmore et al. (1988), Goldberg (1995), and Croft (2001). He concludes as follows:

The main point is that if we grant that constructions may have meaning of their own, in relative independence of the lexical items involved, then we do not need to populate the lexicon with all kinds of implausible meanings for each of the verbs we use in everyday life. The construction grammar approach in which constructions have meanings is therefore both much simpler and much more plausible than the lexical rules approach. (Tomasello 2003: 161)

This reflects a misunderstanding of lexical rules, as they are normally understood. There is no implausible sense populating the lexicon. The lexical rule approach to *He coughed her his cold* states that when the word *coughed* appears with two objects, the whole complex has a certain meaning (see Müller 2006: 876). Furthermore we explicitly distinguish between listed elements (lexical entries) and derived ones. The general term subsuming both is *lexical item*.

The simplicity argument also relies on a misunderstanding of a theory Tomasello advocates, namely the theory due to Goldberg (1995, 2006). For his argument to go through, Tomasello must tacitly assume that verbs can combine freely with constructions, that is, that the grammar does not place extrinsic constraints on such combinations. If it is necessary to also stipulate which verbs can appear in which constructions, then the claim to greater simplicity collapses: each variant lexical item with its "implausible meaning" under the lexical rule approach corresponds to a verb-plus-construction combination under the phrasal approach.

Passages such as the following may suggest that verbs and constructions are assumed to combine freely:<sup>9</sup>

Constructions are combined freely to form actual expressions as long as they can be construed as not being in conflict (invoking the notion of construal is intended

<sup>9</sup> The context of these quotes makes clear that the verb and the argument structure construction are considered constructions. See Goldberg (2006: 21, ex. (2)).

to allow for processes of accommodation or coercion). […] Allowing constructions to combine freely as long as there are no conflicts, allows for the infinitely creative potential of language. […] That is, a speaker is free to creatively combine constructions as long as constructions exist in the language that can be combined suitably to categorize the target message, given that there is no conflict among the constructions. (Goldberg 2006: 22)

But in fact Goldberg does not assume free combination, but rather that a verb is "conventionally associated with a construction" (Goldberg 1995: 50): verbs specify their participant roles and which of those are obligatory direct arguments (*profiled*, in Goldberg's terminology). In fact, Goldberg herself (2006: 211) argues against Borer's putative assumption of free combination (2003) on the grounds that Borer is unable to account for the difference between *dine* (intransitive), *eat* (optionally transitive), and *devour* (obligatorily transitive).<sup>10</sup> Despite Tomasello's comment above, Construction Grammar is no simpler than the lexical rule approach.

The resultative construction is often used to illustrate the simplicity argument. For example, Goldberg (1995: Chapter 7) assumes that the same lexical item for the verb *sneeze* is used in (8a) and (8b). It is simply inserted into different constructions:

	- a. He sneezed.
	- b. He sneezed the napkin off the table.

The meaning of (8a) corresponds more or less to the verb meaning, since the verb is used in the Intransitive Construction. But the Caused-Motion Construction in (8b) contributes additional semantic information concerning the causation and movement: his sneezing caused the napkin to move off the table. *sneeze* is plugged into the Caused-Motion Construction, which licenses the subject of *sneeze* and additionally provides two slots: one for the theme (*napkin*) and one for the goal (*off the table*). The lexical approach is essentially parallel, except that the lexical rule can feed further lexical processes like passivization (*The napkin was sneezed off the table*), and conversion to nouns or adjectives (see Sections 21.2.2 and 21.6).

In a nuanced comparison of the two approaches, Goldberg (1995: 139–140) considers again the added recipient argument in *Mary kicked Joe the ball*, where *kick* is lexically a 2-place verb. She notes that on the constructional view, "the composite fused structure involving both verb and construction is stored in memory". The verb itself retains its original meaning as a 2-place verb, so that "we avoid implausible verb senses such as 'to cause to receive by kicking'." The idea seems to be that the lexical approach, in contrast, must countenance such implausible verb senses since a lexical rule adds a third argument.

But the lexical and constructional approaches are actually indistinguishable on this point. The lexical rule does not produce a verb with the "implausible sense" in (9a). Instead it produces the sense in (9b):

<sup>10</sup>Goldberg's critique cites a 2001 presentation by Borer with the same title as Borer (2003). See Section 21.3.4 for more discussion of this issue. As far as we know, the *dine / eat / devour* minimal triplet originally came from Dowty (1989: 89–90).

	- b. cause(kick(x, y), receive(z, y))
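Spelled out, the rule composes the derived representation from the unchanged two-place relation rather than positing a new atomic three-place sense. A sketch (the helper name is invented):

```python
def add_recipient(rel):
    """Benefactive/recipient rule, semantic side: given a 2-place
    relation R, build the 3-place term cause(R(x,y), receive(z,y));
    the relation itself keeps its original two arguments."""
    return lambda x, y, z: f"cause({rel}({x},{y}), receive({z},{y}))"

kick_ditransitive = add_recipient("kick")
print(kick_ditransitive("mary", "ball", "joe"))
# -> cause(kick(mary,ball), receive(joe,ball))
```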

The same sort of "composite fused structure" is assumed under either view. With respect to the semantic structure, the number and plausibility of senses, and the polyadicity of the semantic relations, the two theories are identical. They mainly differ in the way this representation fits into the larger theory of syntax. They also differ in another respect: on the lexical view, the derived three-argument valence structure is associated with the phonological string *kicked*. Next, we present evidence for this claim.

# **21.2 Evidence for lexical approaches**

### **21.2.1 Valence and coordination**

On the lexical account, the verb *paint* in (4b), for example, is lexically a 2-argument verb, while the unary branching node immediately dominating it is effectively a 3-argument verb. On the constructional view there is no such predicate seeking three arguments that dominates only the verb. Coordination provides evidence for the lexical account.

A generalization about coordination is that two constituents which have compatible syntactic properties can be coordinated and that the result of the coordination is an object that has the syntactic properties of each of the conjuncts. This is reflected by the Categorial Grammar analysis which assumes the category (X\X)/X for the conjunction: the conjunction takes an X to the right, an X to the left and the result is an X.
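This generalization can be stated directly. The sketch below, with invented encodings, implements nothing more than the category check that the conjunction category (X\X)/X enforces (using the verbs discussed just below):

```python
def conjoin(left, right):
    """(X\\X)/X: 'and' takes an X to its right and an X to its left;
    the result is again an X. Conjuncts must therefore have identical
    categories, including their valence."""
    assert left["cat"] == right["cat"], "conjuncts must share a category"
    return {"form": f"{left['form']} and {right['form']}",
            "cat": left["cat"]}

know = {"form": "know", "cat": ("S\\NP", "/", "NP")}
like = {"form": "like", "cat": ("S\\NP", "/", "NP")}
print(conjoin(know, like))
# 'know and like' again takes a subject and an object
```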

For example, in (10a) we have a case of the coordination of two lexical verbs. The coordination *know and like* behaves like the coordinated simplex verbs: it takes a subject and an object. Similarly, two sentences with a missing object are coordinated in (10b) and the result is a sentence with a missing object.

	- b. Bagels, I like and Ellison hates.

The German examples in (11) show that the case requirement of the involved verbs has to be respected. In (11b,c) the coordinated verbs require accusative and dative respectively, and since the case requirements are incompatible with unambiguously case-marked nouns, both of these examples are out.

	- b. \* Ich kenne und helfe diesen Mann.
		- I know and help this.acc man
	- c. \* Ich kenne und helfe diesem Mann.
		- I know and help this.dat man

Interestingly, it is possible to coordinate basic ditransitive verbs with verbs that have additional arguments licensed by the lexical rule. (12) provides examples in English and German ((12b) is quoted from Müller (2013a: 420)):

	- a. She offered and made me a wonderful espresso.<sup>11</sup>
	- b. ich hab ihr jetzt diese Ladung Muffins mit den Herzchen drauf gebacken und gegeben.<sup>12</sup>
		I have her now this load muffins with the little.heart there.on baked and given
		'I have now baked and given her this load of muffins with the little heart on top.'

These sentences show that both verbs are 3-argument verbs at the V<sup>0</sup> level, since they involve V<sup>0</sup> coordination:

(13) [V<sup>0</sup> offered and made] [NP me] [NP a wonderful espresso]

This is expected under the lexical rule analysis but not under the non-lexical constructional one.<sup>13</sup>

Summarizing the coordination argument: coordinated verbs generally must have compatible syntactic properties like valence properties. This means that in (12b), for example, *gebacken* 'baked' and *gegeben* 'given' have the same valence properties. On the lexical approach the creation verb *gebacken*, together with a lexical rule, licenses a ditransitive verb. It can therefore be coordinated with *gegeben*. On the phrasal approach however, the verb *gebacken* has two argument roles and is not compatible with the verb *gegeben*, which has three argument roles. In the phrasal model, *gebacken* can only realize three arguments when it enters the ditransitive phrasal construction or argument structure construction. But in sentences like (12) it is not *gebacken* alone that enters the phrasal syntax, but rather the combination of *gebacken* and *gegeben*. On this view, the verbs are incompatible as far as the semantic roles are concerned.
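On the lexical account the order of operations is thus: lexical rule first, coordination second. A minimal sketch under the same invented representations as above:

```python
def add_benefactive(verb):
    """Lexical rule: extend a 2-argument verb with a benefactive NP."""
    assert len(verb["valence"]) == 2
    return {"form": verb["form"],
            "valence": [verb["valence"][0], "NP[ben]", verb["valence"][1]]}

def coordinate(v1, v2):
    """X0 coordination requires identical valence properties."""
    assert v1["valence"] == v2["valence"], "incompatible valence"
    return {"form": f"{v1['form']} und {v2['form']}",
            "valence": v1["valence"]}

gebacken = {"form": "gebacken", "valence": ["NP[nom]", "NP[acc]"]}
gegeben = {"form": "gegeben", "valence": ["NP[nom]", "NP[ben]", "NP[acc]"]}

# coordinate(gebacken, gegeben) would fail the valence check; after the
# lexical rule has applied, both conjuncts are 3-argument verbs:
print(coordinate(add_benefactive(gebacken), gegeben))
```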

To fix this under the phrasal approach, one could posit a mechanism such that the semantic roles that are required for the coordinate phrase *baked and given* are shared by each of its conjunct verbs and that they are therefore compatible. But this would amount to saying that there are several verb senses for *baked*, something that the anti-lexicalists claim to avoid, as discussed in the next section.


<sup>11</sup>http://www.thespinroom.com.au/?p=102, 2012-07-07.

<sup>12</sup>http://www.musiker-board.de/diverses-ot/35977-die-liebe-637-print.html, 2012-06-08.

<sup>13</sup>One might wonder whether these sentences could be instances of Right Node Raising (RNR) out of coordinated VPs (Bresnan 1974; Abbott 1976):

(i) She [ offered \_\_\_ ] and [ made me \_\_\_ ] a wonderful espresso.

But this cannot be correct. First, under such an analysis the first verb would have been used without a benefactive or recipient object, yet *me* is interpreted as the recipient of both the offering and the making. Second, the second object can be an unstressed pronoun (*She offered and made me it*), which is not possible in RNR. Note that *offered and made* cannot be a pseudo-coordination meaning 'offered to make'. This is possible only with stem forms of certain verbs such as *try*.

A reviewer of Theoretical Linguistics correctly observes that a version of the (phrasal) ASC approach could work in exactly the same way as our lexical analysis. Our ditransitive lexical rule would simply be rechristened as a "ditransitive ASC". This construction would combine with *baked*, thus adding the third argument, prior to its coordination with *gave*. As long as the ASC approach is a notational variant of the lexical rule approach, it works, of course, in exactly the same way. But the literature on the ASC approach represents it as a radical alternative to lexical rules, in which constructions are combined through inheritance hierarchies, instead of allowing lexical rules to alter the argument structure of a verb prior to its syntactic combination with the other words and phrases.

The reviewer also remarked that examples like (14) show that the benefactive argument has to be introduced on the phrasal level.

(14) I designed and built him a house.

Both *designed* and *built* are bivalent verbs and *him* is the benefactive that extends both *designed* and *built*. However, we assume that sentences like (14) can be analyzed as coordination of two verbal items that are licensed by the lexical rule that introduces the benefactive argument. That is, the benefactive is introduced before the coordination.

The coordination facts illustrate a more general point. The output of a lexical rule such as the one that would apply in the analysis of *gebacken* in (12b) is just a word (an X<sup>0</sup>), so it has the same syntactic distribution as an underived word with the same category and valence feature. This important generalization follows from the lexical account, while on the phrasal view it is mysterious at best. The point can be shown with any of the lexical rules that the anti-lexicalists are so keen to eliminate in favor of phrasal constructions. For example, active and passive verbs can be coordinated, as long as they have the same valence properties, as in this Swedish example:

(15) Golfklubben begärde och beviljade-s marklov för banbygget efter en hel del förhandlingar och kompromisser med Länsstyrelsen och Naturvårdsverket.<sup>14</sup>
	golf.club.def requested and granted-pass ground.permit for track.build.def after a whole part negotiations and compromises with county.board.def and nature.protection.agency.def
	'The golf club requested and was granted a ground permit for fairlane construction after a lot of negotiations and compromises with the County Board and the Environmental Protection Agency.'

(English works the same way, as shown by the grammatical translation line.) The passive of the ditransitive verb *bevilja* 'grant' retains one object, so it is effectively transitive and can be coordinated with the active transitive *begära* 'request'.

Moreover, the English passive verb form, being a participle, can feed a second lexical rule deriving adjectives from verbs. All categories of English participles can be converted to adjectives (Bresnan 1982b; 2001: Chapter 3):

<sup>14</sup>http://www.lyckselegolf.se/klubben/kort-historik/, 25.04.2018.

	- b. active past participles (cf. The leaf has fallen): *the fallen leaf*
	- c. passive participles (cf. The toy is being broken (by the child).): *the broken toy*

That the derived forms are adjectives, not verbs, is shown by a host of properties, including negative *un-* prefixation: *unbroken* means 'not broken', just as *unkind* means 'not kind', while the *un-* appearing on verbs indicates not negation but action reversal, as in *untie* (Bresnan 1982b: 21; 2001: Chapter 3). Predicate adjectives preserve the subject of predication of the verb and for prenominal adjectives the rule is simply that the role that would be assigned to the subject goes to the modified noun instead (*The toy remained (un-)broken.*; *the broken toy*). Being an A<sup>0</sup>, such a form can be coordinated with another A<sup>0</sup>, as in the following:

	- b. any [old, rotting, or broken] toys

In (17b), three adjectives are coordinated, one underived (*old*), one derived from a present participle (*rotting*), and one from a passive participle (*broken*). Such coordination is completely mundane on a lexical theory. Each A<sup>0</sup> conjunct has a valence feature (in HPSG it would be the spr feature for predicates or the mod feature for the prenominal modifiers), which is shared with the mother node of the coordinate structure. But the point of the phrasal (or ASC) theory is to deny that words have such valence features.

The claim that lexical derivation of valence structure is distinct from phrasal combination is further supported with evidence from deverbal nominalization (Wechsler 2008a). To derive nouns from verbs, *-ing* suffixation productively applies to all inflectable verbs (*the shooting of the prisoner*), while morphological productivity is severely limited for various other suffixes such as *-(a)tion* (*\* the shootation of the prisoner*). So forms such as *destruction* and *distribution* must be retrieved from memory while *-ing* nouns such as *looting* or *growing* could be (and in the case of rare verbs or neologisms, must be) derived from the verb or the root through the application of a rule (Zucchi 1993). This difference explains why *-ing* nominals always retain the argument structure of the cognate verb, while other forms show some variation. A famous example is the lack of the agent argument for the noun *growth* versus its retention by the noun *growing*: *\* John's growth of tomatoes* versus *John's growing of tomatoes* (Chomsky 1970).<sup>15</sup>
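The asymmetry between productive *-ing* nominalization and stored forms like *growth* can be rendered as follows; the argument-structure lists are invented simplifications, and the rule is stated lexically for concreteness:

```python
def ing_nominalization(verb):
    """Productive rule: derive an -ing noun from a verb, retaining the
    verb's full argument structure."""
    return {"form": verb["form"] + "ing", "cat": "N",
            "arg_st": list(verb["arg_st"])}

grow = {"form": "grow", "cat": "V", "arg_st": ["agent", "theme"]}
growth = {"form": "growth", "cat": "N", "arg_st": ["theme"]}  # listed form

print(ing_nominalization(grow)["arg_st"])
# ['agent', 'theme']: John's growing of tomatoes
print(growth["arg_st"])
# ['theme']: * John's growth of tomatoes
```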

But what sort of rule derives the *-ing* nouns, a lexical rule or a phrasal one? In Marantz's (1997) phrasal analysis, a phrasal construction (notated as *v*P) is responsible for assigning the agent role of *-ing* nouns such as *growing*. For him, none of the words directly selects an agent via its argument structure. The *-ing* forms are permitted to appear in the *v*P construction, which licenses the possessive agent. Non-*ing* nouns such as *destruction* and *growth* do not appear in *v*P. Whether they allow expression of the agent depends on semantic and pragmatic properties of the word: *destruction* involves external causation so it does allow an agent, while *growth* involves internal causation so it does not allow an agent.

<sup>15</sup>See Section 21.3.3 for further discussion.

However, a problem for Marantz is that these two types of nouns can coordinate and share dependents (example (18a) is from Wechsler (2008a: Section 7)):

	- b. The [cultivation, growing or distribution] of medical marijuana within the County shall at all times occur within a secure, locked, and fully enclosed structure, including a ceiling, roof or top, and shall meet the following requirements.<sup>17</sup>

On the phrasal analysis, the nouns *looting* and *growing* occur in one type of syntactic environment (namely *v*P), while forms like *destruction*, *cultivation*, and *distribution* occur in a different syntactic environment. This places contradictory demands on the structure of coordinations like those in (18). As far as we know, neither this problem nor the others raised by Wechsler (2008a) have even been addressed by advocates of the phrasal theory of argument structure.

Consider one last example. In an influential phrasal analysis, Hale and Keyser (1993) derived denominal verbs like *to saddle* through noun incorporation out of a structure akin to [PUT a saddle ON x]. Again, verbs with this putative derivation routinely coordinate and share dependents with verbs of other types:

(19) Realizing the dire results of such a capture and that he was the only one to prevent it, he quickly [saddled and mounted] his trusted horse and with a grim determination began a journey that would become legendary.<sup>18</sup>

As in all of these X<sup>0</sup> coordination cases, under the phrasal analysis the two verbs place contradictory demands on a single phrase structure.

A lexical valence structure is an abstraction or generalization over various occurrences of the verb in syntactic contexts. To be sure, one key use of that valence structure is simply to indicate what sort of phrases the verb must (or can) combine with, and the result of semantic composition; if that were the whole story then the phrasal theory would be viable. But it is not. As it turns out, this lexical valence structure, once abstracted, can alternatively be used in other ways: among other possibilities, the verb (crucially including its valence structure) can be coordinated with other verbs that have similar valence structures; or it can serve as the input to lexical rules specifying a new word bearing a systematic relation to the input word. The coordination and lexical derivation facts follow from the lexical view, while the phrasal theory at best leaves these facts as mysterious and at worst leads to irreconcilable contradictions for the phrase structure.

### **21.2.2 Valence and derivational morphology**

Goldberg & Jackendoff (2004), Alsina (1996), and Asudeh, Dalrymple & Toivonen (2008, 2013) suggest analyzing resultative constructions and/or caused-motion constructions

<sup>16</sup>http://www.amazon.com/review/R3IG4M3Q6YYNFT, 2018-02-20

<sup>17</sup>http://www.scribd.com/doc/64013640/Tulare-County-medical-cannabis-cultivation-ordinance, 05.03.2016

<sup>18</sup>http://www.jouetthouse.org/index.php?option=com\_content&view=article&id=56&Itemid=63, 21.07.2012


as phrasal constructions.<sup>19</sup> As was argued in Müller (2006), this is incompatible with the assumption of lexical integrity. Lexical integrity means that word formation happens before syntax and that the morphological structure is inaccessible to syntactic processes (Bresnan & Mchombo 1995).<sup>20</sup> Let us consider a concrete example, such as (20):

	- a. Er tanzt die Schuhe blutig / in Stücke.
	  he dances the shoes bloody / into pieces
	- b. die in Stücke / blutig getanzten Schuhe
	  the into pieces / bloody danced shoes
	- c. \* die getanzten Schuhe
	  the danced shoes

The shoes are not a semantic argument of *tanzt*. Nevertheless, the referent of the NP that is realized as accusative NP in (20a) is the element that the adjectival participle in (20b) predicates over. Adjectival participles like the one in (20b) are derived from a passive participle of a verb that governs an accusative object. If the accusative object is licensed phrasally by configurations like the one in (20a), then it is not possible to explain why the participle *getanzten* can be formed despite the absence of an accusative object in the valence specification of the verb. See Müller (2006: Section 5) for further examples of the interaction of resultatives and morphology. The conclusion drawn by Dowty (1978: 412) and Bresnan (1982b: 21) in the late 70s and early 80s is that phenomena which feed morphology should be treated lexically. The natural analysis in frameworks like HPSG, CG, CxG, and LFG is therefore one that assumes a lexical rule for the licensing of resultative constructions. See Verspoor (1997), Wechsler (1997), Wechsler & Noh (2001), Wunderlich (1992: 45; 1997: 120–126), Kaufmann & Wunderlich (1998), Müller (2002a: Chapter 5), Kay (2005), and Simpson (1983) for lexical proposals in some of these frameworks. The lexical approach assumes that the lexical item for the mono-valent *tanz*- is related to another lexical item for *tanz*- that selects an object and a result predicate in addition to the subject selected by the mono-valent variant. Inflection and adjective derivation apply to this derived stem and the respective results can be used in (20a) or (20b).
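In an AVM notation of the kind used for (44) below, such a lexical rule can be sketched roughly as follows (my simplified illustration, not the formulation of the works just cited):

$$\begin{bmatrix}
\textsc{phon} & \langle\, \textit{tanz-} \,\rangle\\
\textsc{arg-st} & \langle\, \text{NP}_x \,\rangle
\end{bmatrix} \;\Rightarrow\; \begin{bmatrix}
\textsc{phon} & \langle\, \textit{tanz-} \,\rangle\\
\textsc{arg-st} & \langle\, \text{NP}_x, \text{NP}_y, \text{Pred}_y \,\rangle
\end{bmatrix}$$

The output stem selects an accusative NP and a result predicate predicating over that NP's referent, and it is this output that inflection and adjectival participle formation apply to.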

This argument for a lexical treatment of resultative constructions is similar to the one that was discussed in connection with the GPSG representation of valence in Section 5.5: morphological processes have to be able to see the valence of the element they apply to.

<sup>19</sup>Asudeh & Toivonen (2014: Section 2.3) argue that their account is not constructional. If a construction is a form-meaning pair, their account is constructional, since a certain c-structure is paired with a semantic contribution. Asudeh & Toivonen (2014: Section 2.2) compare their approach with approaches in Constructional HPSG (Sag 1997) and Sign-Based Construction Grammar (see Section 10.6.2), which they term constructional. The only difference between these approaches and the approach by Asudeh, Dalrymple & Toivonen is that the constructions in the HPSG-based theories are modeled using types and hence have a name.

<sup>20</sup>Asudeh et al. (2013: 14) claim that the Swedish Directed Motion Construction does not interact with derivational morphology. However, the parallel German construction does interact with derivational morphology. The absence of this interaction in Swedish can be explained by other factors of Swedish grammar and given this I believe it to be more appropriate to assume an analysis that captures both the German and the Swedish data in the same way.

This is not the case if arguments are introduced by phrasal configurations after the level of morphology.

Asudeh, Dalrymple & Toivonen's papers are about the concept of lexical integrity and about constructions. Asudeh & Toivonen (2014) replied to our target article and pointed out (again) that their template approach makes it possible to specify the functional structure of words and phrases alike. In the original paper they discussed the Swedish word *vägen*, which is the definite form of *väg* 'way'. They showed that its f-structure is parallel to the f-structure for the English phrase *the way*. In our reply (Müller & Wechsler 2014b), we gave in too early, I believe. The point is not about being able to provide the f-structure of words; the point is about morphology, that is – in LFG terms – about deriving the f-structure by a morphological analysis. More generally speaking, one wants to derive all properties of the involved words, that is, their valence, their meaning, and the linking of this meaning to their dependents. What we used in our argument based on the sentences in (20) was parallel to what Bresnan (1982b: 21; 2001: 31) used in her classical argument for a lexical treatment of the passive. So either Bresnan's argument (and ours) is invalid or both arguments are valid and there is a problem for Asudeh, Dalrymple & Toivonen's approach and for phrasal approaches in general. I want to give another example that was already discussed in Müller (2006: 869) but was omitted in Müller & Wechsler (2014a) due to space limitations. I will first point out why this example is problematic for phrasal approaches and then explain why it is not sufficient to be able to assign certain f-structures to words: in (21a), we are dealing with a resultative construction. According to the plugging approach, the resultative meaning is contributed by a phrasal construction into which the verb *fischt* is inserted. There is no lexical item that requires a resultative predicate as its argument. If no such lexical item exists, then it is unclear how the relation between (21a) and (21b) can be established:

	- a. weil jemand die Nordsee leer fischt
	  because somebody the North.Sea empty fishes
	  'because somebody fishes the North Sea empty'
	- b. wegen der *Leerfischung* der Nordsee<sup>21</sup>
	  because of.the empty.fishing of.the North.Sea
	  'because of the fishing that resulted in the North Sea being empty'

As Figure 21.1 on the next page shows, both the arguments selected by the heads and the structures are completely different. In (21b), the element that is the subject of the related construction in (21a) is not realized. As is normally the case in nominalizations, it is possible to realize it in a PP with the preposition *durch* 'by':

(22) wegen der Leerfischung der Nordsee durch die Anrainerstaaten
because of.the empty.fishing of.the North.Sea by the neighboring.states
'because of the fishing by the neighboring states that resulted in the North Sea being empty'

If one assumes that the resultative meaning comes from a particular configuration in which a verb is realized, there would be no explanation for (21b) since no verb is involved in the analysis of this example. One could of course assume that a verb stem is

<sup>21</sup>taz, 20.06.1996, p. 6.

Figure 21.1: Resultative construction and nominalization

inserted into a construction both in (21a) and (21b). The inflectional morpheme -*t* and the derivational morpheme -*ung* as well as an empty nominal inflectional morpheme would then be independent syntactic components of the analysis. However, since Goldberg (2003b: 119) and Asudeh et al. (2013) assume lexical integrity, only entire words can be inserted into syntactic constructions and hence the analysis of the nominalization of resultative constructions sketched here is not an option for them.

One might be tempted to try and account for the similarities between the phrases in (21) using inheritance. One would specify a general resultative construction standing in an inheritance relation to the resultative construction with a verbal head and to the nominalization construction. I have discussed this proposal in more detail in Müller (2006: Section 5.3). It does not work, as one needs embedding for derivational morphology, and this cannot be modeled in inheritance hierarchies (Krieger & Nerbonne 1993; see also Müller (2006) for a detailed discussion).

It would also be possible to assume that both constructions in (23), for which structures such as those in Figure 21.1 would have to be assumed, are connected via metarules.<sup>22,23</sup>

	- a. [ Sbj [ Obj [ Obl V ] ] ]
	- b. [ Det [ [ Adj V -ung ] ] NP[*gen*] ]

The construction in (23b) corresponds to Figure 21.2.<sup>24</sup> The genitive NP is an argument of the adjective. It has to be linked semantically to the subject slot of the adjective.

<sup>22</sup>Goldberg (p. c. 2007, 2009) suggests connecting certain constructions using GPSG-like metarules. Deppermann (2006: 51), who has a more Croftian view of CxG, rules this out. He argues for active/passive alternations that the passive construction has other information structural properties. Note also that GPSG metarules relate phrase structure rules, that is, local trees. The structure in Figure 21.2, however, is highly complex.

<sup>23</sup>The structure in (23b) violates a strict interpretation of lexical integrity as is commonly assumed in LFG. Booij (2005, 2009), working in Construction Grammar, subscribes to a somewhat weaker version, however.

<sup>24</sup>I do not assume zero affixes for inflection. The respective affix in Figure 21.2 is there to show that there is structure. Alternatively one could assume a unary branching rule/construction as is common in HPSG/Construction Morphology.

Figure 21.2: Resultative construction and nominalization

Alternatively, one could assume that the construction only has the form [Adj V -*ung* ], that is, that it does not include the genitive NP. But then one could also assume that the verbal variant of the resultative construction has the form [OBL V] and that Sbj and Obj are only represented in the valence lists. This would almost be a lexical analysis, however.

Turning to lexical integrity again, I want to point out that all that Asudeh & Toivonen can do is assign some f-structure to the N in Figure 21.2. What is needed, however, is a principled account of how this f-structure comes about and how it is related to the resultative construction on the sentence level.

My argument regarding -*ung*-nominalization from Müller (2006: Section 5.1) was also taken up by Bruening (2018). I noted that *Leerfischung* should not be analyzed as a compound of *leer* and *Fischung*, with *Fischung* being the result of combining -*ung* with the intransitive verb lexeme *fisch*-, but rather that -*ung* should apply to a version of *fisch*- that selects for a result predicate and its subject and that this version of *Fischung* is then combined with the result predicate it selects for. Bruening argued against my analysis, claiming that all arguments of nouns are optional and that my analysis would predict that there is a noun *Fischung* with a resultative meaning but without the resultative predicate. As I pointed out in Müller (2006: 869), a noun *Fischung* does exist, but it refers to parts of a boat, not to an event nominalization. Bruening concludes that a syntactic approach is needed and that -*ung* applies to the combination of *leer* and *fisch*-.

Now, while it is generally true that arguments of nouns can be omitted, there are situations in which the argument cannot be omitted without changing the meaning. Sebastian Nordhoff (p. c. 2017) found the following examples:

	- a. Bartträger
	  beard.carrier
	  'man with a beard'
	- b. Spaßmacher
	  joke.maker
	  'jester'

For example, a *Bartträger* is somebody who has a beard. If one omitted the first part of the compound, one would get a *Träger* 'carrier'; a relation to the original sense cannot be established. Similarly a *Spaßmacher* is literally a 'joke.maker'. Without the first part of the compound this would be *Macher*, which translates as *doer* or *action man*. What the examples above have in common is the following: the verbal parts are frequent and in the most frequent uses of the verb the object is concrete. In the compounds above the first part is unusual in that it is abstract. If the first element of the compound is omitted, we get the default reading of the verb, something that is incompatible with the meaning of the verb in the complete compound.

The contrast between *Leerfischung* and #*Fischung* can be explained in a similar way: the default reading of *fisch*- is the one without resultative meaning. Without the realized predicate we get the derivation product *Fischung*, which does not exist (with the relevant meaning).

So, in a lexical analysis of resultatives we have to make sure that the resultative predicate is not optional, and this is what my analysis does: it says that the resultative variant of *fisch*- needs a result predicate; it does not say that *fisch*- *optionally* takes a result predicate. What is needed is a careful formulation of a theory of what can be dropped that ensures that no arguments are omitted that are crucial for recognizing the sense of a certain construction/collocation. The nominalization rules have to be set up accordingly.<sup>25</sup> I do not see

<sup>25</sup>Note that this also applies to lexical theories of idioms of the kind suggested by Sag (2007) and Kay, Sag & Flickinger (2015). If one analyses idioms like *kick the habit* and *kick the bucket* with a special lexical item for *kick*, one has to make sure that the object of *kick* is not omitted since the idioms are not recognizable without the object.

any problems for the analyses of resultatives and particle verbs that I suggested in Müller (2002a, 2003c).

Before I turn to approaches with radical underspecification of argument structure in the next section, I want to comment on a more recent paper by Asudeh, Giorgolo & Toivonen (2014). The authors discuss the phrasal introduction of cognate objects and benefactives. (25a) is an example of the latter construction.

(25) a. The performer sang the children a song.

b. The children were sung a song.

According to the authors, the noun phrase *the children* is not an argument of *sing* but contributed by the c-structure rule that optionally licenses a benefactive.

$$\begin{array}{cccccc}
\text{(26)} & \text{V}' & \rightarrow & \text{V} & \text{DP} & \text{DP}\\
& & & \uparrow = \downarrow & (\uparrow \text{OBJ}) = \downarrow & (\uparrow \text{OBJ}_\theta) = \downarrow\\
& & & (@\textsc{Benefactive}) & &
\end{array}$$

Whenever this rule is called, the template Benefactive can add a benefactive role and the respective semantics, provided this is compatible with the verb that is inserted into the structure. The authors show how the mappings for the passive example in (25b) work, but they do not provide the c-structure rule that licenses such examples. Unless one assumes that arguments in (26) can be optional (see below), one would need a c-structure rule for passive VPs and this rule has to license a benefactive as well.<sup>26</sup> So it would be:

$$\begin{array}{ccccc}
\text{(27)} & \text{V}' & \rightarrow & \text{V[pass]} & \text{DP}\\
& & & \uparrow = \downarrow & (\uparrow \text{OBJ}_\theta) = \downarrow\\
& & & (@\textsc{Benefactive}) &
\end{array}$$

Note that a benefactive cannot be added to just any verb: adding a benefactive to an intransitive verb as in (28a) is out, and the passive that would correspond to (28a) is ungrammatical as well, as (28b) shows:

	- a. \* He laughed the children.
	- b. \* The children were laughed.

The benefactive template would account for the ungrammaticality of (28) since it requires an arg<sup>2</sup> to be present and the intransitive *laugh* does not have an arg<sup>2</sup>, but this account would not extend to other verbs. For example, the template would admit the sentences in (29b–c) since *give* with a prepositional object has an arg<sup>2</sup> (Kibort 2008: 317).

	- a. He gave it to Mary.
	- b. \* He gave Peter it to Mary.
	- c. \* Peter was given it to Mary.

<sup>26</sup>See for instance Bergen & Chang (2005) and van Trijp (2011) for Construction Grammar analyses that assume active and passive variants of phrasal constructions. See Cappelle (2006) on allostructions in general.


*give* could combine with the *to* PP semantically and would then be equivalent to a transitive verb as far as resources are concerned (looking for an arg<sup>1</sup> and an arg<sup>2</sup>). The benefactive template would map the arg<sup>2</sup> to arg<sup>3</sup> and hence (29b) would be licensed. Similar examples can be constructed with other verbs that take prepositional objects, for instance *accuse sb. of something*. Since there are verbs that take a benefactive and a PP object, as shown by (30), (29b) cannot be ruled out with reference to non-existing c-structure rules.

(30) I buy him a coat for hundred dollar.

So, if the c-structure is to play a role in argument structure constructions at all, one could not just claim that all c-structure rules optionally introduce a benefactive argument. Therefore there is something special about the two rules in (26) and (27). The problem is that there is no relation between these rules. They are independent statements saying that there can be a benefactive in the active and that there can be one in the passive. This is what Chomsky (1957: 43) criticized with respect to simple phrase structure grammar, and it was the reason for the introduction of transformations. Bresnan-style LFG captured the generalizations by lexical rules (Bresnan 1978, 1982b) and later by lexical rules in combination with Lexical Mapping Theory (Toivonen 2013). But if elements are added outside the lexical representations, the representations where these elements are added have to be related too. One could say that our knowledge about formal tools has changed since 1957: we can now use inheritance hierarchies to capture generalizations. So one can assume a type (or a template) that is the supertype of all those c-structure rules that introduce a benefactive. But since not all rules allow for the introduction of a benefactive element, this basically amounts to saying: c-structure rules A, B, and C allow for the introduction of a benefactive. In comparison, lexical rule-based approaches have one statement introducing the benefactive. The lexical rule states which verbs are appropriate for adding a benefactive, and syntactic rules are not affected.

Asudeh (p. c. May 2016) and an anonymous reviewer of HeadLex16 pointed out to me that the rules in (26) and (27) can be generalized over if the arguments in (26) are made optional. (31) shows the rule in (26) with the DPs marked as optional by the brackets enclosing them.

$$\begin{array}{cccccc}
\text{(31)} & \text{V}' & \rightarrow & \text{V} & \text{(DP)} & \text{(DP)}\\
& & & \uparrow = \downarrow & (\uparrow \text{OBJ}) = \downarrow & (\uparrow \text{OBJ}_\theta) = \downarrow\\
& & & (@\textsc{Benefactive}) & &
\end{array}$$

Since both of the DPs are optional, (31) is equivalent to a specification of four rules, namely (26) and the three versions of the rule in (32):

$$\begin{array}{cccccc}
\text{(32)} & \text{a.} & \text{V}' \rightarrow & \text{V} & \text{DP} & \\
& & & \uparrow = \downarrow & (\uparrow \text{OBJ}_\theta) = \downarrow & \\
& & & (@\textsc{Benefactive}) & & \\
& \text{b.} & \text{V}' \rightarrow & \text{V} & \text{DP} & \\
& & & \uparrow = \downarrow & (\uparrow \text{OBJ}) = \downarrow & \\
& & & (@\textsc{Benefactive}) & & \\
& \text{c.} & \text{V}' \rightarrow & \text{V} & & \\
& & & \uparrow = \downarrow & & \\
& & & (@\textsc{Benefactive}) & &
\end{array}$$

(32a) is the variant of (31) in which the OBJ is omitted (needed for (33a)), (32b) is the variant in which the OBJ<sub>θ</sub> is omitted (needed for (33b)), and in (32c) both DPs are omitted (needed for (33c)).

	- a. The children were sung a song.
	- b. What kind of picture did the kids draw the teacher?
	- c. Such divine and elaborate meals, she had never been prepared before, not even by her ex-husband who was a professional chef.

Hence, (31) can be used for V′s containing two objects, for V′s in the passive containing just one object, for V′s with the secondary object extracted, and for V′s in the passive with the secondary object extracted. The template-based approach does not overgenerate since the benefactive template is specified such that it requires the verb it applies to to select for an arg<sup>2</sup>. Since intransitives like *laugh* do not select for an arg<sup>2</sup>, a benefactive cannot be added. So, in fact, the actual configuration in the c-structure rule plays only a minor role: the account mainly relies on semantics and resource sensitivity. There is one piece of information that is contributed by the c-structure rule: it constrains the grammatical functions of arg<sup>2</sup> and arg<sup>3</sup>, which are disjunctively specified in the template definitions for arg<sup>2</sup> and arg<sup>3</sup>: arg<sup>2</sup> can be realized as SUBJ or as OBJ. In the active case, arg<sup>1</sup> will be the SUBJ, and because of function-argument bi-uniqueness (Bresnan et al. 2016: 334) no other element can be the SUBJ; hence arg<sup>2</sup> has to be an OBJ. arg<sup>3</sup> can be either an OBJ or an OBJ<sub>θ</sub>. Since arg<sup>2</sup> is an OBJ in the active, arg<sup>3</sup> has to be an OBJ<sub>θ</sub> in the active. In the passive case, arg<sup>1</sup> is suppressed or realized as an OBL (*by* PP). arg<sup>2</sup> will be realized as SUBJ (since English requires a SUBJ to be realized) and arg<sup>3</sup> could be realized as either OBJ or OBJ<sub>θ</sub>. This is not constrained by the template specifications so far. Because of the optionality in (31), either the OBJ or the OBJ<sub>θ</sub> function could be chosen for arg<sup>3</sup>. This means that either Lexical Mapping Theory has to be revised or one has to make sure that the c-structure rule used in the passive of benefactives states the grammatical function of the object correctly. Hence one would need the c-structure rule in (27), and then there would be the missing generalization I pointed out above.
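The mapping possibilities just described can be summarized as follows (my tabulation of the options, not the authors'):

$$\begin{array}{lll}
& \text{active} & \text{passive}\\
\text{arg}_1 & \text{SUBJ} & \text{suppressed or OBL}\\
\text{arg}_2 & \text{OBJ} & \text{SUBJ}\\
\text{arg}_3 & \text{OBJ}_\theta & \text{OBJ or OBJ}_\theta
\end{array}$$

The cell for arg<sup>3</sup> in the passive is the underdetermined case that would make a rule like (27) necessary.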

If one finds a way to set up the mappings to grammatical functions without reference to c-structures in lexical templates, this means that it is not the case that an argument is added by a certain configuration that the verb enters. Since any verb may enter (31) and since the only important thing is the interaction between the lexical specification of the verb and the benefactive template, the same structures would be licensed if the benefactive template were added to the lexical items of verbs directly. The actual configuration would not constrain anything. All (alleged) arguments from language acquisition and


psycholinguistics (see Sections 21.6 and 21.7) for phrasal analyses would not apply to such a phrasal account.

If the actual c-structure configuration does not contribute any restrictions as to what arguments may be realized and what grammatical functions they get, the difference between the lexical use of the benefactive template and the phrasal introduction as executed in (31) is really minimal. However, there is one area in grammar where there is a difference: coordination. As Müller & Wechsler (2014a: Section 6.1) pointed out, it is possible to coordinate ditransitive verbs with verbs that appear together with a benefactive. (34) is one of their examples:

(34) She then offered and made me a wonderful espresso — nice.<sup>27</sup>

If the benefactive information is introduced at the lexical level, the coordinated verbs basically have the same selectional requirements. If the benefactive information is introduced at the phrasal level, *offered* and *made* are coordinated and then the benefactive constraints are imposed on the result of the coordination by the c-structure rule. While it is clear that the lexical items that would be assumed in a lexical approach can be coordinated in a symmetric coordination, problems seem to arise for the phrasal approach. It is unclear how the asymmetric coordination of the mono- and ditransitive verbs can be accounted for and how the constraints of the benefactive template are distributed over the two conjuncts. The fact that the benefactive template is optional does not help here since the optionality means that the template is either called or it is not. The situation is depicted in Figure 21.3. The optionality of the template call in the top figure basically corresponds to the disjunction of the two trees in the lower part of the figure. The optionality does not allow for a distribution to one of the daughters in a coordination.

Mary Dalrymple (p. c. 2016) pointed out that the coordination rule that coordinates two verbs can be annotated with two optional calls of the benefactive template.

$$\begin{array}{cccccc}
\text{(35)} & \text{V} & \rightarrow & \text{V} & \text{Conj} & \text{V}\\
& & & (@\textsc{Benefactive}) & & (@\textsc{Benefactive})
\end{array}$$

In an analysis of the examples in (34), the template in rule (26) would not be called; the respective templates in (35) would be called instead. While this does work technically, similar coordination rules would be needed for all other constructions that introduce arguments in c-structures. Furthermore, the benefactive would have to be introduced in several unrelated places in the grammar. Finally, the benefactive is introduced at nodes consisting of a single verb without any additional arguments being licensed, which means that one could have gone for the lexical approach right away.

Timm Lichte (p. c. 2016) pointed out an important consequence of a treatment of coordination via (35): since the result of the coordination behaves like a normal ditransitive verb it would enter the normal ditransitive construction. Toivonen's original motivation for a phrasal analysis was the observation that extraction out of and passivization of benefactive constructions is restricted for some speakers (Toivonen 2013: 416):

<sup>27</sup>http://www.thespinroom.com.au/?p=102 2012-07-07

Figure 21.3: The optionality of a call of a template corresponds to a disjunction.

(36) a. What kind of picture did the kids draw the teacher?

b. \* Which teacher did the kids draw a picture?

(37) a. \* My sister was carved a soap statue of Bugs Bunny (by a famous sculptor).

b. My sister was given a soap statue of Bugs Bunny (by a famous sculptor).

The ungrammaticality of the passive in (37a) in comparison to the grammaticality of the passive with a normal ditransitive construction in (37b) would be accounted for by assuming a fixed phrasal configuration that is the only possibility to license benefactives in a grammar. But by having c-structure rules like (35) in the grammar there would be a way to circumvent the c-structure rule for benefactives. If sentences with coordinations can be analyzed by coordinating lexical verbs and then using the c-structure rule for ditransitives, it would be predicted that none of the constraints on passive and extraction that are formulated at the phrasal level would hold. This is contrary to the facts: by coordinating items with strong restrictions (imposed by the benefactive templates) with items with weaker restrictions, one gets a coordination structure that is at least as restrictive as the items that are coordinated. One does not get less restrictive by coordinating items. See Müller (2018a) for further discussion of ditransitives and benefactives and extraction and passivization.

In Müller & Wechsler (2014a) we argued that the approach to Swedish Caused-Motion Constructions in Asudeh et al. (2008, 2013) would not carry over to German since the German construction interacts with derivational morphology. Asudeh & Toivonen (2014) argued that Swedish is different from German and hence there would not be a problem. However, the situation is different with the benefactive constructions. Although English and German do differ in many respects, both languages have similar dative constructions:

	- a. He baked her a cake.
	- b. Er buk ihr einen Kuchen.
	  he baked her.dat a.acc cake

Now, the free constituent order in German was analyzed by assuming binary branching structures in which a VP node is combined with one of its arguments or adjuncts (see Section 7.4). The c-structure rule is repeated in (39):

$$\begin{array}{ccccc}
\text{(39)} & \text{VP} & \rightarrow & \text{NP} & \text{VP}\\
& & & (\uparrow \text{SUBJ}|\text{OBJ}|\text{OBJ}_\theta) = \downarrow & \uparrow = \downarrow
\end{array}$$

The dependent elements contribute to the f-structure of the verb and coherence/completeness ensure that all arguments of the verb are present. One could add the introduction of the benefactive argument to the VP node of the right-hand side of the rule. However, since the verb-final variant of (38b) would have the structure in (40), one would get spurious ambiguities, since the benefactive could be introduced at every node:

$$\text{(40)}\quad \text{weil } [_{\text{VP}}\ \text{er } [_{\text{VP}}\ \text{einen Kuchen } [_{\text{VP}}\ [_{\text{V}}\ \text{buk}\,]]]]$$

So the only option seems to be to introduce the benefactive at the rule that gets the recursion going, namely the rule that projects the lexical verb to the VP level. The relevant rule from page 239 is repeated as (41) for convenience.

$$\begin{array}{cccc}
\text{(41)} & \text{VP} & \rightarrow & \text{(V)}\\
& & & \uparrow = \downarrow
\end{array}$$

Note also that benefactive datives appear in adjectival environments as in (42):

(42) der einen Kuchen seiner Frau backende Mann
the a.acc cake his.dat wife baking man
'the man who is baking his wife a cake'

In order to account for these datives one would have to assume that the adjective-to-AP rule that would be parallel to (41) introduces the dative. The semantics of the benefactive template would have to somehow make sure that the benefactive argument is not added to intransitive verbs like *lachen* 'to laugh' or participles like *lachende* 'laughing'. While this may be possible, I find the overall approach unattractive. First, it does not have anything to do with the original constructional proposal but just states that the benefactive may be introduced at several places in syntax; second, the unary branching syntactic rule applies to a lexical item and hence is very similar to a lexical rule; and third, the analysis does not capture cross-linguistic commonalities of the construction. In a lexical rule-based approach such as the one suggested by Briscoe & Copestake (1999: Section 5), a benefactive argument is added to certain verbs and the lexical rule is parallel in all languages that have this phenomenon (Müller 2018a). The respective languages differ simply in the way the arguments are realized with respect to their heads. In languages that have adjectival participles, these are derived from the respective verbal stems. The morphological rule is the same independent of benefactive arguments and the syntactic rules for adjectival phrases do not have to mention benefactive arguments. I discuss the template-based approach in more detail in Müller (2018a). This book also contains a fully worked out analysis of the benefactive and the resultative constructions and their interactions in German and English in the framework of HPSG.
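A rough sketch of such a benefactive lexical rule in the AVM style used later in this chapter (my simplification; Briscoe & Copestake's actual rule involves more structure and a constrained semantics):

$$\begin{bmatrix}
\textsc{arg-st} & \langle\, \text{NP}_x, \text{NP}_y \,\rangle\\
\textsc{content} & V(e, x, y)
\end{bmatrix} \;\Rightarrow\; \begin{bmatrix}
\textsc{arg-st} & \langle\, \text{NP}_x, \text{NP}_z, \text{NP}_y \,\rangle\\
\textsc{content} & V(e, x, y) \wedge \textit{benefactive}(e, z)
\end{bmatrix}$$

The restriction to appropriate verb classes (verbs of creation and obtaining, for instance) is stated as a condition on the input of the rule, and the syntactic rules of the language then realize the added NP like any other object.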

## **21.3 Radical underspecification: the end of argument structure?**

### **21.3.1 Neo-Davidsonianism**

In the last section we examined proposals that assume that verbs come with certain argument roles and are inserted into prespecified structures that may contribute additional arguments. While we showed that this is not without problems, there are even more radical proposals according to which the construction adds all agent arguments, or even all arguments. The notion that the agent argument should be severed from its verb is put forth by Marantz (1984, 1997), Kratzer (1996), Embick (2004) and others. Others suggest that no arguments are selected by the verb. Borer (2003) calls such proposals *exoskeletal* since the structure of the clause is not determined by the predicate, that is, the verb does not project an inner "skeleton" of the clause. Opposed to such proposals are *endoskeletal* approaches, in which the structure of the clause is determined by the predicate, that is, lexical proposals. The radical exoskeletal approaches are mainly proposed in Mainstream Generative Grammar (Borer 1994, 2003, 2005, Schein 1993, Hale & Keyser 1997, Lohndal 2012) but can also be found in HPSG (Haugereid 2009). We will not discuss these proposals in detail here, but we review the main issues insofar as they relate to the question of lexical argument structure.<sup>28</sup> We conclude that the available empirical evidence favors the lexical argument structure approach over such alternatives.

Exoskeletal approaches usually assume some version of Neo-Davidsonianism. Davidson (1967) argued for an event variable in the logical form of action sentences (43a). Dowty (1989) coined the term *neo-Davidsonian* for the variant in (43b), in which the verb translates to a property of events, and the subject and complement dependents are translated as arguments of secondary predicates such as *agent* and *theme*.<sup>29</sup>

<sup>28</sup>See Müller (2010a: Section 11.11.3) for a detailed discussion of Haugereid's approach.

<sup>29</sup>Dowty (1989) called the system in (43a) an *ordered argument system*.

Kratzer (1996) further noted the possibility of mixed accounts such as (43c), in which the agent (subject) argument is severed from the *kill*′ relation, but the theme (object) remains an argument of the *kill*′ relation.<sup>30</sup>

$$\begin{array}{ll}
\text{(43)} & \text{a. } \textit{kill}: \lambda y\lambda x\exists e[\textit{kill}(e, x, y)]\\
& \text{b. } \textit{kill}: \lambda y\lambda x\exists e[\textit{kill}(e) \wedge \textit{agent}(e, x) \wedge \textit{theme}(e, y)]\\
& \text{c. } \textit{kill}: \lambda y\lambda x\exists e[\textit{kill}(e, y) \wedge \textit{agent}(e, x)]
\end{array}$$
Kratzer (1996) observed that a distinction between Davidsonian, neo-Davidsonian and mixed can be made either "in the syntax" or "in the conceptual structure" (Kratzer 1996: 110–111). For example, on a lexical approach of the sort we advocate here, any of the three alternatives in (43) could be posited as the semantic content of the verb *kill*. A lexical entry for *kill* in the mixed model is given in (44).

$$\text{(44)}\quad \begin{bmatrix}
\textsc{phon} & \langle\, \textit{kill} \,\rangle\\
\textsc{arg-st} & \langle\, \text{NP}_x, \text{NP}_y \,\rangle\\
\textsc{content} & \textit{kill}(e, y) \wedge \textit{agent}(e, x)
\end{bmatrix}$$

In other words, the lexical approach is neutral on the question of the "conceptual structure" of eventualities, as noted already in a different connection in Section 21.1.4. For this reason, certain semantic arguments for the neo-Davidsonian approach, such as those put forth by Schein (1993: Chapter 4) and Lohndal (2012), do not directly bear upon the issue of lexicalism, as far as we can tell.

But Kratzer (1996), among others, has gone further and argued for an account that is neo-Davidsonian (or rather, mixed) "in the syntax". Kratzer's claim is that the verb specifies only the internal argument(s), as in (45a) or (45b), while the agent (external argument) role is assigned by the phrasal structure. On the "neo-Davidsonian in the syntax" view, the lexical representation of the verb has no arguments at all, except the event variable, as shown in (45c).

$$\begin{array}{ll}
\text{(45)} & \text{a. } \textit{kill}: \lambda y\lambda e[\textit{kill}(e, y)]\\
& \text{b. } \textit{kill}: \lambda y\lambda e[\textit{kill}(e) \wedge \textit{theme}(e, y)]\\
& \text{c. } \textit{kill}: \lambda e[\textit{kill}(e)]
\end{array}$$
On such accounts, the remaining dependents of the verb receive their semantic roles from silent secondary predicates, which are usually assumed to occupy the positions of functional heads in the phrase structure. An Event Identification rule identifies the event variables of the verb and the silent light verb (Kratzer 1996: 22); this is why the existential quantifiers in (43) have been replaced with lambda operators in (45). A standard term for the agent-assigning silent predicate is "little *v*" (see Section 4.1.4 on little *v*).

<sup>30</sup>The event variable is shown as existentially bound, as in Davidson's original account. As discussed below, in Kratzer's version it must be bound by a lambda operator instead.

These extra-lexical dependents are the analogs of the ones contributed by the constructions in Construction Grammar.

In the following subsections we address arguments that have been put forth in favor of the little *v* hypothesis, from idiom asymmetries (Section 21.3.2) and deverbal nominals (Section 21.3.3). We argue that the evidence actually favors the lexical view. Then we turn to problems for exoskeletal approaches, from idiosyncratic syntactic selection (Section 21.3.4) and expletives (Section 21.3.5). We conclude with a look at the treatment of idiosyncratic syntactic selection under Borer's exoskeletal theory (Section 21.3.7), and a summary (Section 21.3.8).

### **21.3.2 Little** *v* **and idiom asymmetries**

Marantz (1984) and Kratzer (1996) argued for severing the agent from the argument structure as in (45a), on the basis of putative idiom asymmetries. Marantz (1984) observed that while English has many idioms and specialized meanings for verbs in which the internal argument is the fixed part of the idiom and the external argument is free, the reverse situation is considerably rarer. To put it differently, the nature of the role played by the subject argument often depends on the filler of the object position, but not vice versa. To take Kratzer's examples (Kratzer 1996: 114):

	- a. kill a cockroach
	- b. kill a conversation
	- c. kill an evening watching TV
	- d. kill a bottle (i.e. empty it)
	- e. kill an audience (i.e., wow them)

On the other hand, one does not often find special meanings of a verb associated with the choice of subject, leaving the object position open (examples from Marantz (1984: 26)):

	- a. Harry killed NP.
	- b. Everyone is always killing NP.
	- c. The drunk refused to kill NP.
	- d. Silence certainly can kill NP.

Kratzer observes that a mixed representation of *kill* as in (48a) allows us to specify varying meanings that depend upon its sole NP argument.

	- a. *kill*: λ*a*λ*e*[*kill*(*e*, *a*)]
	- b. If *a* is a time interval, then *kill*(*e*, *a*) = truth if *e* is an event of wasting *a*; if *a* is animate, then *kill*(*e*, *a*) = truth if *e* is an event in which *a* dies; … etc.

On the polyadic (Davidsonian) theory, the meaning could similarly be made to depend upon the filler of the agent role: "there is no technical obstacle" (Kratzer 1996: 116) to conditions like those in (48b), except reversed, so that it is the filler of the agent role instead of the theme role that affects the meaning. But, she writes, this could not be done if the agent is not an argument of the verb. According to Kratzer, the agent-severed representation (such as (48a)) disallows similar constraints on the meaning that depend upon the agent, thereby capturing the idiom asymmetry.

But as noted by Wechsler (2005), "there is no technical obstacle" to specifying agent-dependent meanings even if the Agent has been severed from the verb as Kratzer proposes. It is true that there is no variable for the agent in (48a). But there is an event variable *e*, and the language user must be able to identify the agent of *e* in order to interpret the sentence. So one could replace the variable *a* with "the agent of *e*" in the expressions in (48b), and thereby create verbs that violate the idiom asymmetry.
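To see how this would go, here is a hypothetical meaning specification of the kind that the severed-agent representation fails to exclude (my construction, parallel to (48b)):

If the agent of *e* is an emotion, then *kill*(*e*, *a*) = truth if *e* is an event of the agent overwhelming *a*; if the agent of *e* is animate, then *kill*(*e*, *a*) = truth if *e* is an event in which the agent causes *a* to die; … etc.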

While this may seem to be a narrow technical or even pedantic point, it is nonetheless crucial. Suppose we try to repair Kratzer's argument with an additional assumption: that modulations in the meaning of a polysemous verb can only depend upon arguments of the *relation* denoted by that verb, and not on other participants in the event. Under that additional assumption, it makes no difference whether the agent is severed from the lexical entry or not. For example, consider the following (mixed) neo-Davidsonian representation of the semantic content in the lexical entry of *kill*:

(49) *kill*: λ*y*λ*x*λ*e*[*kill*(*e*, *y*) ∧ *agent*(*e*, *x*)]

Assuming that sense modulations can only be affected by arguments of the *kill(e,y)* relation, we derive the idiom asymmetry, even if (49) is the lexical entry for *kill*. So suppose that we try to fix Kratzer's argument with a different assumption: that modulations in the meaning of a polysemous verb can only depend upon an argument of the lexically denoted function. Kratzer's "neo-Davidsonian in the syntax" lexical entry in (45a) lacks the agent argument, while the lexical entry in (49) clearly has one. But Kratzer's entry still fails to predict the asymmetry because, as noted above, it has the *e* argument and so the sense modulation can be conditioned on the "agent of *e*". As noted above, that event argument cannot be eliminated (for example through existential quantification) because it is needed in order to undergo event identification with the event argument of the silent light verb that introduces the agent (Kratzer 1996: 22).

Moreover, recasting Kratzer's account in lexicalist terms allows for verbs to vary. This is an important advantage, because the putative asymmetry is only a tendency. The following are examples in which the subject is a fixed part of the idiom and there are open slots for non-subjects:<sup>31</sup>

(50) a. A little bird told X that S.

'X heard the rumor that S.'

b. The cat's got X's tongue. 'X cannot speak.'

<sup>31</sup>(50a) is from Nunberg, Sag & Wasow (1994: 526), (50b) from Bresnan (1982a: 349–350), and (50c) from Bresnan (1982a: 349–350).

c. What's eating X? 'Why is X so galled?'

Further data and discussion of subject idioms in English and German can be found in Müller (2007a: Section 3.2.1).

The tendency towards a subject-object asymmetry plausibly has an independent explanation. Nunberg, Sag & Wasow (1994) argue that the subject-object asymmetry is a side-effect of an animacy asymmetry. The open positions of idioms tend to be animate while the fixed positions tend to be inanimate. Nunberg et al. (1994) derive these animacy generalizations from the figurative and proverbial nature of the metaphorical transfers that give rise to idioms. If there is an independent explanation for this tendency, then a lexicalist grammar successfully encodes those patterns, perhaps with a mixed neo-Davidsonian lexical decomposition, as explained above (see Wechsler (2005) for such a lexical account of the verbs *buy* and *sell*). But the little *v* hypothesis rigidly predicts this asymmetry for all agentive verbs, and that prediction is not borne out.

### **21.3.3 Deverbal nominals**

An influential argument against lexical argument structure involves English deverbal nominals and the causative alternation. It originates from a mention in Chomsky (1970), and is developed in detail by Marantz (1997); see also Pesetsky (1995) and Harley & Noyer (2000). The argument is often repeated, but it turns out that the empirical basis of the argument is incorrect, and the actual facts point in the opposite direction, in favor of lexical argument structure (Wechsler 2008b,a).

Certain English causative alternation verbs allow optional omission of the agent argument (51), while the cognate nominal disallows expression of the agent (52):

(51) a. that John grows tomatoes

b. that tomatoes grow

(52) a. \* John's growth of tomatoes

b. the tomatoes' growth, the growth of the tomatoes

In contrast, nominals derived from obligatorily transitive verbs such as *destroy* allow expression of the agent, as shown in (54a):

(53) a. that the army destroyed the city

b. \* that the city destroyed

(54) a. the army's destruction of the city

b. the city's destruction

Following a suggestion by Chomsky (1970), Marantz (1997) argued on the basis of these data that the agent role is lacking from lexical entries. In verbal projections like (51) and (53) the agent role is assigned in the syntax by little *v*. Nominal projections like (52) and (54) lack little *v*. Instead, pragmatics takes over to determine which agents can be expressed by the possessive phrase: the possessive can express "the sort of agent implied by an event with an external rather than an internal cause" because only the former can "easily be reconstructed" (quoted from Marantz (1997: 218)). The destruction of a city has a cause external to the city, while the growth of tomatoes is internally caused by the tomatoes themselves (Smith 1970). Marantz points out that this explanation is unavailable if the noun is derived from a verb with an argument structure specifying its agent, since the deverbal nominal would inherit the agent of a causative alternation verb.

The empirical basis for this argument is the putative mismatch between the allowability of agent arguments, across some verb-noun cognate pairs: e.g., *grow* allows the agent but *growth* does not. But it turns out that the *grow*/*growth* pattern is rare. Most deverbal nominals precisely parallel the cognate verb: if the verb has an agent, so does the noun. Moreover, there is a ready explanation for the exceptional cases that exhibit the *grow*/*growth* pattern (Wechsler 2008a). First consider non-alternating theme-only intransitives (unaccusatives), as in (55) and non-alternating transitives as in (56). The pattern is clear: if the verb is agentless, then so is the noun:

(55) a. A letter arrived.

b. the arrival of the letter

c. \* The mailman arrived a letter.

d. \* the mailman's arrival of the letter

(56) a. The army is destroying the city.

b. the army's destruction of the city

This favors the view that the noun inherits the lexical argument structure of the verb. For the anti-lexicalist, the badness of (55c) and (55d) would have to receive independent explanations. For example, on Harley & Noyer's (2000) proposal, (55c) is disallowed because a feature of the root ARRIVE prevents it from appearing in the context of *v*, but (55d) is instead ruled out because the cause of an event of arrival cannot be easily reconstructed from world knowledge. This exact duplication in two separate components of the linguistic system would have to be replicated across all non-alternating intransitive and transitive verbs, a situation that is highly implausible.

Turning to causative alternation verbs, Marantz's argument is based on the implicit generalization that noun cognates of causative alternation verbs (typically) lack the agent argument. But apart from the one example of *grow/growth*, there do not seem to be any clear cases of this pattern. Besides *grow(th)*, Chomsky (1970: examples (7c) and (8c)) cited two experiencer predicates, *amuse* and *interest*: *John amused (interested) the children with his stories* versus *\* John's amusement (interest) of the children with his stories*. But this was later shown by Rappaport (1983) and Dowty (1989) to have an independent aspectual explanation. Deverbal experiencer nouns like *amusement* and *interest*

typically denote a mental state, where the corresponding verb denotes an event in which such a mental state comes about or is caused. These result nominals lack not only the agent but all the eventive arguments of the verb, because they do not refer to events. Exactly to the extent that such nouns can be construed as representing events, expression of the agent becomes acceptable.

In a response to Chomsky (1970), Carlota Smith (1972) surveyed Webster's dictionary and found no support for Chomsky's claim that deverbal nominals do not inherit agent arguments from causative alternation verbs. She listed many counterexamples, including "*explode*, *divide*, *accelerate*, *expand*, *repeat*, *neutralize*, *conclude*, *unify*, and so on at length" (Smith 1972: 137). Harley & Noyer (2000) also noted many so-called "exceptions": *explode*, *accumulate*, *separate*, *unify*, *disperse*, *transform*, *dissolve/dissolution*, *detach(ment)*, *disengage(ment)*, and so on. The simple fact is that these are not exceptions, because there is no generalization to which they could be exceptions. These long lists of verbs represent the norm, especially for suffix-derived nominals (in -*tion*, -*ment*, etc.). Many zero-derived nominals from alternating verbs also allow the agent, such as *change*, *release*, and *use*: *my constant change of mentors from 1992–1997*; *the frequent release of the prisoners by the governor*; *the frequent use of sharp tools by underage children* (examples from Borer (2003: fn. 13)).<sup>32</sup>

Like the experiencer nouns mentioned above, many zero-derived nominals lack event readings. Some reject all the arguments of the corresponding eventive verb, not just the agent: *\* the freeze of the water*, *\* the break of the window*, and so on. According to Stephen Wechsler, *his drop of the ball* is slightly odd, but *the drop of the ball* has exactly the same degree of oddness. The locution *a drop in temperature* matches the verbal one *The temperature dropped*, and both verbal and nominal forms disallow the agent: *\* The storm dropped the temperature. \* the storm's drop of the temperature*. In short, the facts seem to point in exactly the opposite direction from what has been assumed in this oft-repeated argument against lexical valence. Apart from the one isolated case of *grow/growth*, event-denoting deverbal nominals match their cognate verbs in their argument patterns.

Turning to *grow/growth* itself, we find a simple explanation for its unusual behavior (Wechsler 2008a). When the noun *growth* entered the English language, causative (transitive) *grow* did not exist. The OED provides these dates of the earliest attestations of *grow* and *growth*:

(57) a. *grow* (intransitive): Old English

b. *growth*: 1587

c. *grow* (transitive): 1774
Thus *growth* entered the language at a time when transitive *grow* did not exist. The argument structure and meaning were inherited by the noun from its source verb, and then preserved into present-day English. This makes perfect sense if, as we claim, words have predicate argument structures. Nominalization by *-th* suffixation is not productive

<sup>32</sup>Pesetsky (1995: 79, ex. (231)) assigns a star to *the thief's return of the money*, but it is acceptable to many speakers. The *Oxford English Dictionary* lists a transitive sense for the noun *return* (definition 11a), and corpus examples like *her return of the spoils* are not hard to find.

in English, so *growth* is listed in the lexicon. To explain why *growth* lacks the agent we need only assume that a lexical entry's predicate argument structure dictates whether it takes an agent argument or not. So even this one word provides evidence for lexical argument structure.
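The contrast can be stated directly in the two lexical entries; the following is a schematic sketch in the style of (44), not Wechsler's actual formalization:

$$\begin{array}{ll}
\textit{growth}: & \textsc{arg-st}\ \langle\, (\text{NP}_y) \,\rangle, \quad \textit{grow}(e, y)\\
\textit{growing}: & \textsc{arg-st}\ \langle\, (\text{NP}_x), (\text{NP}_y) \,\rangle, \quad \textit{grow}(e, y) \wedge \textit{agent}(e, x)
\end{array}$$

*growth* was listed without an agent when it entered the language and has stayed that way, while the productively derived *growing* inherits whatever argument structure the current verb *grow* has.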

### **21.3.4 Idiosyncratic syntactic selections**

The notion of lexical valence structure immediately explains why the argument realization patterns are strongly correlated with the particular lexical heads selecting those arguments. It is not sufficient to have general lexical items without valence information and let the syntax and world knowledge decide about argument realizations, because not all realizational patterns are determined by the meaning. The form of the preposition of a prepositional object is sometimes loosely semantically motivated but in other cases arbitrary. For example, the valence structure of the English verb *depend* captures the fact that it selects an *on*-PP to express one of its semantic arguments:

	- a. John depends \*(on) Mary.
	- b. John trusts (\*on) Mary.
	- c. $$\begin{bmatrix}
\textsc{phon} & \langle\, \textit{depend} \,\rangle\\
\textsc{arg-st} & \langle\, \text{NP}_x, \text{PP}[\textit{on}]_y \,\rangle\\
\textsc{content} & \textit{depend}(x, y)
\end{bmatrix}$$

Such idiosyncratic lexical selection is utterly pervasive in human language. The verb or other predicator often determines the choice between direct and oblique morphology, and for obliques, it determines the choice of adposition or oblique case. In some languages such as Icelandic even the subject case can be selected by the verb (Zaenen, Maling & Thráinsson 1985).

Selection is language-specific. English *wait* selects *for* (German *für*) while German *warten* selects *auf* 'on' with an accusative object:

	- a. I am waiting for my husband.
	- b. Ich warte auf meinen Mann.
	  I wait on my man.acc
	  'I am waiting for my husband.'

A learner has to acquire the fact that *warten* has to be used with *auf* + accusative and not with other prepositions or another case. Similarly, the Dutch verb *ergert* 'annoys' takes an *aan* PP, while its German counterpart takes an *über* PP:

	- a. Kim ergert zich aan Sandy.
	  Kim annoys refl on Sandy
	  'Kim is/gets annoyed at Sandy.'
	- b. Kim ärgert sich über Sandy.
	  Kim annoys refl above Sandy
	  'Kim is/gets annoyed at Sandy.'

There are language-internal niches with the same type of prepositional objects. For instance, *freuen* 'to rejoice over/about' and *lachen* 'laugh at/about' take *über* as well. But there is no general way to predict on semantic grounds which preposition has to be taken.

It is often impossible to find semantic motivation for case. In German there is a tendency to replace genitive (61a) with dative (61b) with no apparent semantic motivation:

	- a. daß auch hier der Opfer des Faschismus gedacht werde
	  that also here the victims.gen of.the fascism remembered is
	- b. daß auch hier den Opfern des Faschismus gedacht werde […]<sup>33</sup>
	  that also here the victims.dat of.the fascism remembered is
	  'that the victims of fascism would be remembered here too'

The synonyms *treffen* and *begegnen* 'to meet' govern different cases (example from Pollard & Sag (1987: 126)).

	- a. Er traf den Mann.
	  he.nom met the.acc man
	- b. Er begegnete dem Mann.
	  he.nom met the.dat man

One has to specify the case that the respective verbs require in the lexical items of the verbs.<sup>34</sup>

A radical variant of the plugging approach is suggested by Haugereid (2009).<sup>35</sup> Haugereid (2009: 12–13) assumes that the syntax combines a verb with an arbitrary combination of a subset of five different argument roles. Which arguments can be combined with a verb is not restricted by the lexical item of the verb.<sup>36</sup> A problem for such views is that the meaning of an ambiguous verb sometimes depends on which of its arguments are expressed. The German verb *borgen* has the two translations 'borrow' and 'lend', which basically are two different perspectives on the same event (see Kunze (1991, 1993) for an extensive discussion of verbs of exchange of possession). Interestingly, the dative object is obligatory only with the 'lend' reading (Müller 2010a: 403):

(63) a. Ich borge ihm das Eichhörnchen.
I lend him.dat the squirrel
'I lend the squirrel to him.'

<sup>33</sup>Frankfurter Rundschau, 07.11.1997, p. 6.

<sup>34</sup>Or at least mark the fact that *treffen* takes an object with the default case for objects and *begegnen* takes a dative object in German. See Haider (1985b), Heinz & Matiasek (1994), and Müller (2001) on structural and lexical case.

<sup>35</sup>Technical aspects of Haugereid's approach are discussed in Section 21.3.6.

<sup>36</sup>Haugereid has the possibility to impose valence restrictions on verbs, but he claims that he uses this possibility just in order to get a more efficient processing of his computer implementation (p. 13).

b. Ich borge das Eichhörnchen.
I borrow the squirrel
'I borrow the squirrel.'

If we omit the dative object, we get only the 'borrow' reading. So the grammar must specify for specific verbs that certain arguments are necessary for a certain verb meaning or a certain perspective on an event.
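In lexical terms, this amounts to two related lexical items for *borgen*, sketched here in the style of (44) (a rough illustration, not a worked-out analysis):

$$\begin{array}{ll}
\textit{borgen}_{\text{'lend'}}: & \textsc{arg-st}\ \langle\, \text{NP}_x, \text{NP}[\textit{dat}]_y, \text{NP}[\textit{acc}]_z \,\rangle, \quad \textit{lend}(x, y, z)\\
\textit{borgen}_{\text{'borrow'}}: & \textsc{arg-st}\ \langle\, \text{NP}_x, \text{NP}[\textit{acc}]_z \,\rangle, \quad \textit{borrow}(x, z)
\end{array}$$

The dative NP is part of the valence of the 'lend' item, so omitting it removes that reading; an approach that does not restrict the combination of arguments with verbs has no place to state this connection.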

Synonyms with differing valence specifications include the minimal triplet mentioned earlier: *dine* is obligatorily intransitive (or takes an *on-*PP), *devour* is transitive, and *eat* can be used either intransitively or transitively (Dowty 1989: 89–90). Many other examples are given in Levin (1993) and Levin & Rappaport Hovav (2005).

In a phrasal constructionist approach one would have to assume phrasal patterns with the preposition or case, into which the verb is inserted. For (59b), the pattern includes a prepositional object with *auf* and an accusative NP, plus an entry for *warten* specifying that it can be inserted into such a structure (see Kroch & Joshi (1985: Section 5.2) for such a proposal in the framework of TAG). Since there are generalizations regarding verbs with such valence representations, one would be forced to have two inheritance hierarchies: one for lexical entries with their valence properties and another one for specific phrasal patterns that are needed for the specific constructions in which these lexical items can be used.

More often, proponents of neo-constructionist approaches either make proposals that are difficult to distinguish from lexical valence structures (see Section 21.3.7 below) or simply decline to address the problem. For instance, Lohndal (2012) writes:

An unanswered question on this story is how we ensure that the functional heads occur together with the relevant lexical items or roots. This is a general problem for the view that Case is assigned by functional heads, and I do not have anything to say about this issue here. (Lohndal 2012)

We think that getting case assignment right in simple sentences, without vast overgeneration of ill-formed word sequences, is a minimal requirement for a linguistic theory.

### **21.3.5 Expletives**

A final example of the irreducibility of valence to semantics is provided by verbs that select expletives and by the reflexive arguments of inherently reflexive verbs in German:

	- b. weil (es) mir (vor der Prüfung) graut
		because expl me.dat before the exam dreads
		'because I am dreading the exam'
	- c. weil er es bis zum Professor bringt
		because he expl until to.the professor brings
		'because he made it to professor'

The lexical heads in (64) need to contain information about the expletive subjects/objects and/or reflexive pronouns that do not fill semantic roles. Note that German allows for subjectless predicates and hence the presence of expletive subjects cannot be claimed to follow from general principles. (64c) is an example with an expletive object. Explanations referring to the obligatory presence of a subject would fail on such examples in any case. Furthermore, it has to be ensured that the inherently reflexive verb *erholen* 'to recover' is not realized in the [Sbj IntrVerb] construction for intransitive verbs (or the respective functional categories in a Minimalist setting), although the relation *erholen*′ (*relax*′) is a one-place predicate and hence *erholen* is semantically compatible with the construction.

### **21.3.6 An exoskeletal approach**

In what follows I discuss Haugereid's proposal (2007) in more detail. His analysis has all the high-level problems that were mentioned in the previous subsections, but since it is worked out in detail it is interesting to see its predictions.

Haugereid (2007), working in the framework of HPSG, suggests an analysis along the lines of Borer (2005) where the meaning of an expression is defined as depending on the arguments that are present. He assumes that there are five argument slots that are assigned to semantic roles as follows:

	- Arg1: agent
	- Arg2: patient
	- Arg3: benefactive
	- Arg4: the end point of a path or a resultative predicate
	- Arg5: antecedent
Here, antecedent is a more general role that stands for instrument, comitative, manner and source. The roles Arg1–Arg3 correspond to subject and objects. Arg4 is a resultative predicate of the end of a path. Arg4 can be realized by a PP, an AP or an NP. (65) gives examples for the realization of Arg4:

	- b. John hammered the metal *flat*.
	- c. He painted the car *a brilliant red*.

Whereas Arg4 follows the other participants in the causal chain of events, the antecedent precedes the patient in the order of events. It is realized as a PP. (66) is an example of the realization of Arg5:

(66) John punctured the balloon *with a needle*.

Haugereid now assumes that argument frames consist of these roles. He provides the examples in (67):


Haugereid points out that a given verb can occur in several different argument frames. He provides the variants in (68) for the verb *drip*:


He proposes the inheritance hierarchy in Figure 21.4 in order to represent all possible argument combinations. The Arg5 role is omitted due to space considerations.

Haugereid assumes binary-branching structures where arguments can be combined with a head in any order. There is a dominance schema for each argument role. The schema realizing argument role 3 provides a link value *arg3*+. If argument role 2 is provided by another schema, we arrive at the frame *arg23*. For an unergative intransitive verb, it is possible to specify that it has the argument frame *arg1*. This frame is only compatible with the types *arg1*+, *arg2*−, *arg3*− and *arg4*−.

Figure 21.4: Hierarchy of argument frames following Haugereid (2007)

Verbs that have an optional object are assigned to *arg1-12* according to Haugereid. This type allows for the following combinations: *arg1*+, *arg2*−, *arg3*− and *arg4*− as well as *arg1*+, *arg2*+, *arg3*− and *arg4*−.

This approach comes very close to an idea by Goldberg: verbs are underspecified with regard to the sentence structures in which they occur and it is only the actual realization of arguments in the sentence that decides which combinations of arguments are realized. One should bear in mind that the hierarchy in Figure 21.4 corresponds to a considerable disjunction: it lists all possible realizations of arguments. If we say that *essen* 'to eat' has the type *arg1-12*, then this corresponds to the disjunction *arg1* ∨ *arg12*. In addition to the information in the hierarchy above, one also requires information about the syntactic properties of the arguments (case, the form of prepositions, verb forms in verbal complements). Since this information is in part specific to each verb (see Section 21.1), it cannot be present in the dominance schemata and must instead be listed in each individual lexical entry. For the lexical entry of *warten auf* 'wait for', there must be information about the fact that the subject has to be an NP and that the prepositional object is an *auf*-PP with accusative. The use of a type hierarchy then allows one to elegantly encode the fact that the prepositional object is optional. The difference from a disjunctively specified comps list of the form in (69) is just a matter of formalization.

(69) comps ⟨ NP[*str*] ⟩ ∨ ⟨ NP[*str*], PP[*auf* , *acc*] ⟩
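The equivalence can be made explicit in a few lines. The sketch below is my own illustration, not Haugereid's implementation: each type denotes the set of maximally specific argument frames it subsumes, so an underspecified type such as *arg1-12* behaves exactly like the disjunction in (69):

```python
# Leaf argument frames are sets of realized roles; every non-maximal
# type in the hierarchy denotes the set of leaf frames it subsumes.
ARG1  = frozenset({"arg1"})          # intransitive frame
ARG12 = frozenset({"arg1", "arg2"})  # transitive frame

HIERARCHY = {
    "arg1":    {ARG1},
    "arg12":   {ARG12},
    "arg1-12": {ARG1, ARG12},  # underspecified: the object is optional
}

def compatible(lexical_type, realized_roles):
    """A realization is licensed iff it instantiates one of the maximal
    frames subsumed by the verb's type: arg1-12 thus behaves exactly
    like the disjunction arg1 v arg12, i.e., like (69)."""
    return frozenset(realized_roles) in HIERARCHY[lexical_type]

print(compatible("arg1-12", {"arg1"}))          # True: object omitted
print(compatible("arg1-12", {"arg1", "arg2"}))  # True: object realized
print(compatible("arg1", {"arg1", "arg2"}))     # False: unergative verb
```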

Since Haugereid's structures are binary-branching, it is possible to derive all permutations of arguments (70a–b), and adjuncts can be attached to every branching node (70c–d).

	- b. dass [arg2 Pizza [arg1 keiner isst]]
		that pizza nobody eats
		'that nobody eats pizza'

Haugereid has therefore found solutions for some of the problems in Goldberg's analysis that were pointed out in Müller (2006). Nevertheless, there are a number of other problems, which I will discuss in what follows. In Haugereid's approach, nothing is said about the composition of meaning. He follows the so-called Neo-Davidsonian approach. In this kind of semantic representation, arguments of the verb are not directly represented on the verb. Instead, the verb normally has an event argument and the argument roles belonging to the event in question are determined in a separate predication. (71) shows two alternative representations, where *e* stands for the event variable.

	- b. *eat*′ (e, x, y) ∧ *man*′ (x) ∧ *pizza*′ (y)
	- c. *eat*′ (e) ∧ *agent*(e,x) ∧ *theme*(e,y) ∧ *man*′ (x) ∧ *pizza*′ (y)

Haugereid adopts Minimal Recursion Semantics (MRS) as his semantic formalism (see also Sections 9.1.6 and 19.3). That arguments belong to a particular predicate is represented by the relevant predications sharing the same handle. The representation in (71c) corresponds to (72):

(72) h1:*essen*′ (e), h1:*arg1*(x), h1:*arg2*(y), h2:*mann*′ (x), h3:*pizza*′ (y)
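For readers who prefer code to notation, the following sketch (a drastic simplification of MRS, for exposition only) shows how handle identity does the work of grouping role predications with their predicate:

```python
# An MRS-like bag of elementary predications: (handle, predicate, arguments).
# Handle identity is what ties the argN relations to their predicate.
EPS = [
    ("h1", "essen", ("e",)),
    ("h1", "arg1",  ("x",)),
    ("h1", "arg2",  ("y",)),
    ("h2", "mann",  ("x",)),
    ("h3", "pizza", ("y",)),
]

def roles_of(handle):
    """Collect the role predications grouped with a predicate via
    handle identity, as in (72)."""
    return {pred: args for h, pred, args in EPS
            if h == handle and pred.startswith("arg")}

print(roles_of("h1"))  # {'arg1': ('x',), 'arg2': ('y',)}
```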

This analysis captures Goldberg's main idea: meaning arises from particular constituents being realized together with a head.

For the sentence in (73a), Haugereid (2007, p. c.) assumes the semantic representation in (73b):<sup>37</sup>

	- a. Der Mann fischt den Teich leer.
		the man fishes the pond empty
		'The man fishes the pond empty.'
	- b. h1:*mann*′ (x), h2:*teich*′ (y), h3:*leer*′ (e), h4:*fischen*′ (e2), h4:*arg1*(x), h4:*arg2*(y), h4:*arg4*(h3)

In (73b), the *arg1*, *arg2* and *arg4* relations have the same handle as *fischen*′ . Following Haugereid's definitions, this means that *arg2* is the patient of the event. In the case of (73a), this makes incorrect predictions since the accusative element is not a semantic

<sup>37</sup>See Haugereid (2009: 165) for an analysis of the Norwegian examples in (i).

(i) Jon maler veggen rød.
	Jon paints wall.def red
	'Jon paints the wall red.'

argument of the main verb. It is a semantic argument of the secondary predicate *leer* 'empty' and has been raised to the object of the resultative construction. Depending on the exact analysis assumed, the accusative object is either a syntactic argument of the verb or of the adjective, however, it is never a semantic argument of the verb. In addition to this problem, the representation in (73b) does not capture the fact that *leer* 'empty' predicates over the object. Haugereid (2007, p.c.) suggests that this is implicit in the representation and follows from the fact that all *arg4*s predicate over all *arg2*s. Unlike Haugereid's analysis, analyses using lexical rules that relate a lexical item of a verb to another verbal item with a resultative meaning allow for a precise specification of the semantic representation that then captures the semantic relation between the predicates involved. In addition, the lexical rule-based analysis makes it possible to license lexical items that do not establish a semantic relation between the accusative object and the verb (Wechsler 1997, Wechsler & Noh 2001; Müller 2002a: Chapter 5).

Haugereid sketches an analysis of the syntax of the German clause and tackles active/passive alternations. However, certain aspects of the grammar are not elaborated on. In particular, it remains unclear how complex clauses containing AcI verbs such as *sehen* 'to see' and *lassen* 'to let' should be analyzed. Arguments of embedded and embedding verbs can be permuted in these constructions. Haugereid (2007, p. c.) assumes special rules that allow arguments of more deeply embedded verbs to be saturated, for example, a special rule that combines an *arg2* argument of an argument with a verb. In order to combine *das Nilpferd* and *nicht füttern helfen lässt* in sentences such as (74), he is forced to assume a special grammatical rule that combines an argument of a doubly embedded verb with another verb:

(74) weil Hans Cecilia John das Nilpferd nicht füttern helfen lässt
	because Hans Cecilia John the hippo not feed help lets
	'because Hans is not letting Cecilia help John feed the hippo.'

In Müller (2004d: 220), I have argued that embedding under complex-forming predicates is only constrained by performance factors (see also Section 12.6.3). In German, verbal complexes with more than four verbs are barely acceptable. Evers (1975: 58–59) has pointed out, however, that the situation in Dutch is different since Dutch verbal complexes have a different branching: in Dutch, verbal complexes with up to five verbs are possible. Evers attributes this difference to a greater processing load for German verbal complexes (see also Gibson 1998: Section 3.7). Haugereid would have to assume that there are more rules for Dutch than for German. In this way, he would give up the distinction between competence and performance and incorporate performance restrictions directly into the grammar. If he wanted to maintain a distinction between the two, then Haugereid would be forced to assume an infinite number of schemata or a schema with functional uncertainty since depth of embedding is only constrained by performance factors. Existing HPSG approaches to the analysis of verbal complexes do without functional uncertainty (Hinrichs & Nakazawa 1994a). Since such raising analyses are required for object raising anyway (as discussed above), they should be given preference.

Summing up, it must be said that Haugereid's exoskeletal approach does account for different orderings of arguments, but it neither derives the correct semantic representations nor does it offer a solution for the problem of idiosyncratic selection of arguments and the selection of expletives.

### **21.3.7 Is there an alternative to lexical valence structure?**

The question for theories denying the existence of valence structure is what replaces it to explain idiosyncratic lexical selection. In her exoskeletal approach, Borer (2005) explicitly rejects lexical valence structures. But she posits post-syntactic interpretive rules that are difficult to distinguish from them. To explain the correlation of *depend* with an *on*-PP, she posits the following interpretive rule (Borer 2005: Vol. II, p. 29):

(75) MEANING ⇔ π<sub>9</sub> + [⟨e⟩<sub>on</sub>]

Borer refers to all such cases of idiosyncratic selection as idioms. In a rule such as (75), "MEANING is whatever the relevant idiom means" (Borer 2005: Vol. II, p. 27). In (75), π<sub>9</sub> is the "phonological index" of the verb *depend* and ⟨e⟩<sub>on</sub> "corresponds to an open value that must be assigned range by the f-morph *on*" (Borer 2005: Vol. II, p. 29), where f-morphs are function words or morphemes. Hence this rule brings together much the same information as the lexical valence structure in (58c). Discussing such "idiom" rules, Borer writes:

Although by assumption a listeme cannot be associated with any grammatical properties, one device used in this work has allowed us to get around the formidable restrictions placed on the grammar by such a constraint – the formation of idioms. […] Such idiomatic specification could be utilized, potentially, not just for *arrive* and *depend on*, but also for obligatorily transitive verbs […], for verbs such as *put*, with their obligatory locative, and for verbs which require a sentential complement.

The reader may object that subcategorization, of sorts, is introduced here through the back door, with the introduction, in lieu of lexical syntactic annotation, of an articulated listed structure, called an *idiom*, which accomplishes, de facto, the same task. The objection of course has some validity, and at the present state of the art, the introduction of idioms may represent somewhat of a concession. (Borer 2005: Vol. II, p. 354–355)

Borer goes on to pose various questions for future research, related to constraining the class of possible idioms. With regard to that research program it should be noted that a major focus of lexicalist research has been narrowing the class of possible subcategorizations and extricating derivable properties from idiosyncratic subcategorization. Those are the functions of HPSG lexical hierarchies, for example.

### **21.3.8 Summary**

In Sections 21.3.2–21.3.5 we showed that the question of which arguments must be realized in a sentence cannot be reduced to semantics and world knowledge or to general

facts about subjects. The consequence is that valence information has to be connected to lexical items. One therefore must either assume a connection between a lexical item and a certain phrasal configuration, as in Croft's approach (2003) and in LTAG, or assume our lexical variant. In a Minimalist setting, the right set of features must be specified lexically to ensure the presence of the right case-assigning functional heads. This is basically similar to the lexical valence structures we are proposing here, except that it needlessly introduces various problems discussed above, such as the problem of coordination raised in Section 21.2.1.

## **21.4 Relations between constructions**

On the lexical rules approach, word forms are related by lexical rules: a verb stem can be related to a verb with finite inflection and to a passive verb form; verbs can be converted to adjectives or nouns; and so on. The lexical argument structure accompanies the word and can be manipulated by the lexical rule. In this section we consider what can replace such rules within a phrasal or ASC approach.

### **21.4.1 Inheritance hierarchies for constructions**

For each valence structure that the lexicalist associates with a root lexeme (transitive, ditransitive, etc.), the phrasal approach requires multiple phrasal constructions, one to replace each lexical rule or combination of lexical rules that can apply to the word. Taking ditransitives, for example, the phrasal approach requires an active-ditransitive construction, a passive-ditransitive construction, and so on, to replace the output of every lexical rule or combination of lexical rules applied to a ditransitive verb. (Thus Bergen & Chang (2005: 169–170) assume an active-ditransitive and a passive-ditransitive construction and Kallmeyer & Osswald (2012: 171–172) assume active and passive variants of the transitive construction.) On that view some of the active voice constructions for German would be:

(76) a. Nom V
	- b. Nom Acc V
	- c. Nom Dat V
	- d. Nom Dat Acc V

The passive voice constructions corresponding to (76) would be:

(77) a. V V-Aux
	- b. Nom V V-Aux
	- c. Dat V V-Aux
	- d. Nom Dat V V-Aux
Merely listing all these constructions is not only uneconomical but fails to capture the obvious systematic relation between active and passive constructions. Since phrasalists reject both lexical rules and transformations, they need an alternative way to relate


phrasal configurations and thereby explain the regular relation between active and passive. The only proposals to date involve the use of inheritance hierarchies, so let us examine them.

Researchers working in various frameworks, both with lexical and phrasal orientation, have tried to develop inheritance-based analyses that could capture the relation between valence patterns such as those in (76) and (77) (see for instance Kay & Fillmore 1999: 12; Michaelis & Ruppenhofer 2001: Chapter 4; Candito 1996; Clément & Kinyon 2003: 188; Kallmeyer & Osswald 2012: 171–172; Koenig 1999: Chapter 3; Davis & Koenig 2000; Kordoni 2001 for proposals in CxG, TAG, and HPSG). The idea is that a single representation (lexical or phrasal, depending on the theory) can inherit properties from multiple constructions. In a phrasal approach the description of the pattern in (76b) inherits from the transitive and the active construction and the description of (77b) inherits from both the transitive and the passive constructions. Figure 21.5 illustrates the inheritance-based lexical approach: a lexical entry for a verb such as *read* or *eat* is combined with either an active or passive representation. The respective representations for the active and passive are responsible for the expression of the arguments.

Figure 21.5: Inheritance Hierarchy for active and passive

As was already discussed in Section 10.2, inheritance-based analyses cannot account for multiple changes in valence, as for instance the combination of passive and impersonal construction that can be observed in languages like Lithuanian (Timberlake 1982: Section 5), Irish (Noonan 1994), and Turkish (Özkaragöz 1986). Özkaragöz's Turkish examples are repeated here with the original glossing as (78) for convenience:

(78) a. Bu şato-da boğ-ul-un-ur.
	this château-loc strangle-pass-pass-aor
	'One is strangled (by one) in this château.'
	- b. Bu oda-da döv-ül-ün-ür.
	this room-loc hit-pass-pass-aor
	'One is beaten (by one) in this room.'
Another example from Section 10.2 that cannot be handled with inheritance is multiple causativization in Turkish. Turkish allows double and even triple causativization (Lewis 1967: 146):

(79) Öl-dür-t-tür-t- (Turkish)
	die-caus-caus-caus-caus
	'to cause somebody to cause somebody to kill somebody'

An inheritance-based analysis would not work, since inheriting the same information several times does not add anything new. Krieger & Nerbonne (1993) make the same point with respect to derivational morphology in cases like *preprepreversion*: inheriting information about the prefix *pre*- twice or more often does not add anything.
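The formal point can be demonstrated directly. In the sketch below (my own illustration, under the simplifying assumption that inheritance amounts to the union of constraint sets), inheriting from the same supertype twice yields exactly the same object as inheriting from it once, whereas a lexical rule, being a function from items to items, can apply to its own output:

```python
# The information contributed by a 'causative' supertype, modeled as a
# set of constraints; inheritance is set union.
CAUSATIVE = {"adds-causer"}

def inherit(item, supertype):
    """Inheritance as union of constraint sets: idempotent."""
    return item | supertype

def causative_rule(arg_st):
    """A lexical rule: every application adds a new causer argument."""
    return ["causer"] + arg_st

base = {"verb"}
once = inherit(base, CAUSATIVE)
twice = inherit(once, CAUSATIVE)
print(once == twice)  # True: inheriting 'causative' twice adds nothing

print(causative_rule(causative_rule(["agent", "patient"])))
# ['causer', 'causer', 'agent', 'patient']: rule application is not
# idempotent, so double (or triple) causativization is unproblematic
```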

So assuming phrasal models, the only way to capture the generalization with regard to (76) and (77) seems to be to assume GPSG-like metarules that relate the constructions in (76) to the ones in (77). If the constructions are lexically linked, as in LTAG, the respective mapping rules would be lexical rules. Approaches that combine LTAG with the Goldbergian plugging idea, such as the one by Kallmeyer & Osswald (2012), would have to have extended families of trees that reflect the possibility of having additional arguments and would have to make sure that the right morphological form is inserted into the respective trees. The morphological rules would be independent of the syntactic structures in which the derived verbal lexemes could be used. One would have to assume two independent types of rules: GPSG-like metarules that operate on trees and morphological rules that operate on stems and words. We believe that this is an unnecessary complication. Apart from being complicated, the morphological rules would not be acceptable as form-meaning pairs in the CxG sense, since one aspect of the form, namely that additional arguments are required, is not captured in these morphological rules. If such morphological rules were accepted as proper constructions, then there would not be any reason left to require that the arguments have to be present in a construction in order for it to be recognizable, and hence the lexical approach would be accepted.<sup>38</sup>

Inheritance hierarchies are the main explanatory device in Croft's Radical Construction Grammar (Croft 2001). He also assumes phrasal constructions and suggests representing these in a taxonomic network (an inheritance hierarchy). He assumes that every idiosyncrasy of a linguistic expression is represented on its own node in this kind of network. Figure 21.6 shows part of the hierarchy he assumes for sentences.

Figure 21.6: Classification of phrasal patterns in Croft (2001: 26)

<sup>38</sup>Compare the discussion of *Totschießen* 'shoot dead' in example (94) below.

There are sentences with intransitive verbs and sentences with transitive verbs. Sentences with the form [Sbj kiss Obj] are special instances of the construction [Sbj TrVerb Obj]. The [Sbj kick Obj] construction also has further sub-constructions, namely the constructions [Sbj kick the bucket] and [Sbj kick the habit]. Since constructions are always pairs of form and meaning, this gives rise to a problem: in a normal sentence with *kick*, there is a kicking relation between the subject and the object of *kick*. This is not the case for the idiomatic use of *kick* in (80):

(80) He kicked the bucket.

This means that there cannot be a normal inheritance relation between the [Sbj kick Obj] and the [Sbj kick the bucket] construction. Instead, only parts of the information may be inherited from the [Sbj kick Obj] construction. The other parts are redefined by the sub-construction. This kind of inheritance is referred to as *default inheritance*.

*kick the bucket* is a rather fixed expression, that is, it is not possible to passivize it or front parts of it without losing the idiomatic reading (Nunberg, Sag & Wasow 1994: 508). However, this is not true for all idioms. As Nunberg, Sag & Wasow (1994: 510) have shown, there are idioms that can be passivized (81a) as well as realizations of idioms where parts of idioms occur outside of the clause (81b).

	- b. The strings [that Pat pulled] got Chris the job.

The problem now is that one would have to assume two nodes in the inheritance hierarchy for idioms that can undergo passivization, since the realization of the constituents is different in the active and passive variants but the meaning is nevertheless idiosyncratic. The relation between the active and passive form would not be captured. Kay (2002) has proposed an algorithm for computing objects (Construction-like objects, CLOs) from hierarchies that then license active and passive variants. As I have shown in Müller (2006: Section 3), this algorithm does not deliver the desired results, and it is far from straightforward to improve it to the point that it actually works. Even if one were to adopt the changes I proposed, there are still phenomena that cannot be described using inheritance hierarchies (see Section 10.2 in this book).

A further interesting point is that the verbs have to be explicitly listed in the constructions. This raises the question of how constructions should be represented where the verbs are used differently. If a new node in the taxonomic network is assumed for cases like (82), then Goldberg's criticism of lexical analyses that assume several lexical entries for a verb that can appear in various constructions<sup>39</sup> will be applicable here: one would have to assume constructions for every verb and every possible usage of that verb.

(82) He kicked the bucket into the corner.

<sup>39</sup>Note the terminology: I used the word *lexical entry* rather than *lexical item*. The HPSG analysis uses lexical rules that correspond to Goldberg's templates. What Goldberg criticizes is lexical rules that relate lexical entries, not lexical rules that license new lexical items, which may be stored or not. HPSG takes the latter approach to lexical rules. See Section 9.2.

For sentences with negation, Croft assumes the hierarchy with multiple inheritance given in Figure 21.7.

Figure 21.7: Interaction of phrasal patterns following Croft (2001: 26)

The problem with this kind of representation is that it remains unclear how the semantic embedding of the verb meaning under negation can be represented. If all constructions are pairs of form and meaning, then there would have to be a semantic representation for [Sbj IntrVerb] (a cont value or sem value). Similarly, there would have to be a meaning for [Sbj Aux-n't Verb]. The problem now arises that the meaning of [Sbj IntrVerb] has to be embedded under the meaning of the negation, and this cannot be achieved directly using inheritance since X and not(X) are incompatible. There is a technical solution to this problem using auxiliary features. Since there are a number of such interactions in grammars of natural languages, this kind of analysis is highly implausible if one claims that features are a direct reflection of observable properties of linguistic objects. For a more detailed discussion of approaches with classifications of phrasal patterns, see Müller (2010b) as well as Müller (2007a: Section 18.3.2.2), and for the use of auxiliary features in inheritance-based analyses of the lexicon, see Müller (2007a: Section 7.5.2.2).

Figure 21.8 shows Ziem & Lasch's hierarchy for German sentences with the verbs *lachen* 'laugh', *weinen* 'cry', *drücken* 'push', and *mögen* 'like', which is similar to Croft's hierarchy in spirit (Ziem & Lasch 2013: 97).

Figure 21.8: Inheritance hierarchy for clauses by Ziem & Lasch (2013: 97)

The points that I made with respect to Croft's hierarchy also apply to this hierarchy for German, and in fact they demonstrate the problems even more clearly. The idiomatic usages of *den Preis drücken* 'to beat down the price' and *die Schulbank drücken* 'to go to school' are not as fixed as the hierarchy seems to

suggest. For example, *den Preis drücken* may appear with an indefinite article and there may be NP-internal modification:

	- b. So kann man den schon recht guten Preis weiter drücken.
		this.way can one the yet right good price further press
		'This way the rather good price can be reduced even further.'

Note also that it would be wrong to claim that all instances of *den Preis drücken* involve the realization of an NP with nominative case.

	- a such price to press is not easy
	- 'It is not easy to beat down such a good price.'

Since Construction Grammar does not use empty elements, (84) cannot be explained without the assumption of a separate phrasal construction, one without an NP[*nom*].

So what has to be said about the special usage of *drücken* 'to press' is that *drücken* has to cooccur with an NP (definite or indefinite) containing *Preis* 'price'. If one insists on a phrasal representation like the one in Figure 21.8, one has to explain how the clauses represented in this figure are related to other clauses in which the NP contains an adjective. One would be forced to assume relations between complex linguistic objects (basically something equivalent to transformations with the power that was assumed in the 1950s; see p. 86), or one would have to assume that *den Preis* 'the price' is not a more specific description of the NP[*acc*] but rather corresponds to some underspecified representation that allows the integration of adjectives between determiner and noun (see the discussion of the phrasal lexical item in (40) on p. 336). A third way to capture the relation between *den Preis drücken* 'reduce the price' and *den guten Preis drücken* 'reduce the good price' would be to assume a TAG-style grammar that can take a tree, break it up, and insert an adjective in the middle. I have never seen these issues discussed anywhere in the literature. Hierarchies like the ones in Figure 21.6 and Figure 21.8 seem to classify some attested examples, but they do not say anything about grammar in general. Note, for instance, that (83a) differs from the representation in Figure 21.8 by having an additional modal verb and by having the NP containing *Preis* fronted. Such frontings can be nonlocal. How is this accounted for? If the assumption is that the elements in Figure 21.8 are not ordered, what would be left of the original constructional motivation? If the assumption is that elements of the phrasal constructions may be discontinuous, what are the restrictions on this (see Section 10.6.4.7 and Section 11.7.1)? The alternative to classifying (some of the) possible phrases is the hierarchy of lexical types given in Figure 21.9 on the next page. The respective lexical items have valence specifications that allow them to be used in certain configurations: arguments can be scrambled, arguments can be extracted, passivization lexical rules may apply, and so on.

Figure 21.9: Lexical alternative to inheritance hierarchy in Figure 21.8

### **21.4.2 Mappings between different levels of representations**

Culicover & Jackendoff (2005: Section 6.3) suggest that passive should be analyzed as one of several possible mappings from the Grammatical Function tier to the surface realization of arguments. Surface realizations of referential arguments can be NPs in a certain case, with certain agreement properties, or in a certain position. While such analyses that work by mapping elements with different properties onto different representations are common in theories like LFG and HPSG (Koenig 1999, Bouma, Malouf & Sag 2001), a general property of these analyses is that one needs one level of representation per interaction of phenomena (arg-st, sem-arg, add-arg in Koenig's proposal, arg-st, deps, spr, comps in Bouma, Malouf and Sag's proposal). This was discussed extensively in Müller (2007a: Section 7.5.2.2) with respect to extensions that would be needed for Koenig's analysis.

Since Culicover & Jackendoff argue for a phrasal model, we will discuss their proposal here. Culicover & Jackendoff assume a multilayered model in which semantic representations are linked to grammatical functions, which are linked to tree positions. Figure 21.10 shows an example for an active sentence. GF stands for Grammatical Function.

Figure 21.10: Linking grammatical functions to tree positions: active

Culicover & Jackendoff (2005: 204) explicitly avoid names like Subject and Object since this is crucial for their analysis of the passive to work. They assume that the first GF following a

bracket is the subject of the clause the bracket corresponds to (p. 195–196) and hence has to be mapped to an appropriate tree position in English. Note that this view of grammatical functions and obliqueness does not account for subjectless sentences that are possible in some languages, for instance in German.<sup>40</sup>

Regarding the passive, the authors write:

we wish to formulate the passive not as an operation that deletes or alters part of the argument structure, but rather as a piece of structure in its own right that can be unified with the other independent pieces of the sentence. The result of the unification is an alternative licensing relation between syntax and semantics. (Culicover & Jackendoff 2005: 203)

They suggest the following representation of the passive:

(85) *[GF* ≻ [*GF* …]*]* ⇔ *[* … V + pass … (by NP) … *]*

The italicized parts are the normal structure of the sentence and the non-italicized parts are an overlay on the normal structure, that is, additional constraints that have to hold in passive sentences. Figure 21.11 shows the mapping of the example discussed above that corresponds to the passive.


Figure 21.11: Linking grammatical functions to tree positions: passive

Although Culicover & Jackendoff emphasize the similarity between their approach and Relational Grammar (Perlmutter 1983), there is an important difference: in Relational Grammar additional levels (strata) can be stipulated if additional remappings are needed. In Culicover & Jackendoff's proposal there is no additional level. This causes problems for the analysis of languages which allow for multiple argument alternations. Examples from Turkish were provided in (78). Approaches that assume that the personal passive is the unification of a general structure with a passive-specific structure will not be able to capture this, since they commit to a certain structure too early. The problem for approaches that state syntactic structure for the passive is that such a structure, once stated, cannot be modified. Culicover & Jackendoff's proposal works in this respect since

<sup>40</sup>Of course one could assume empty expletive subjects, as was suggested by Grewendorf (1995: 1311), but empty elements and especially those without meaning are generally avoided in the constructionist literature. See Müller (2010a: Section 3.4, Section 11.1.1.3) for further discussion.

there are no strong constraints on the right-hand side of their constraint in (85). But there is a different problem: when passivization is applied a second time, it has to apply to the innermost bracket, that is, the result of applying (85) a second time should be:

$$\text{(86)}\quad [\text{GF}_i \succ [\text{GF}_j \dots]]_k \Leftrightarrow [\dots \text{V}_k + \text{pass} \dots (\text{by NP}_i) \dots (\text{by NP}_j) \dots]_k$$

This cannot be done with unification, since unification only checks for compatibility: given that the first application of passive was possible, a second application would be possible as well. Dots in representations are always dangerous, and in the example at hand one would have to make sure that NP<sub>i</sub> and NP<sub>j</sub> are distinct, since the statement in (85) just says that there has to be a *by*-PP somewhere. What is needed instead of unification is something that takes a GF representation, searches for the outermost bracket, and then places a bracket to the left of the next GF. But this is basically a rule that maps one representation onto another one, just like lexical rules do.

If Culicover & Jackendoff want to stick to a mapping analysis, the only option for analyzing the data seems to be to assume an additional level for impersonal passives from which the mapping to phrase structure is done. In the case of a Turkish sentence like (87), which is a personal passive, the mapping to this level would be the identity function.

(87) Arkadaş-ım bu oda-da döv-ül-dü.
	friend-my this room-loc hit-pass-aor
	'My friend is beaten (by one) in this room.'

In the case of passivization + impersonal construction, the correct mappings would be implemented by two mappings between the three levels that finally result in a mapping as the one that is seen in (78b), repeated here as (88) for convenience.

(88) Bu oda-da döv-ül-ün-ür.
	this room-loc hit-pass-pass-aor
	'One is beaten (by one) in this room.'

Note that passivization + impersonal construction is also problematic for purely inheritance-based approaches. All that such approaches can do is stipulate four different relations between argument structure and phrase structure: active, passive, impersonal construction, and passive + impersonal construction. But this misses the fact that (88) is an impersonal variant of the passive in (87).

In contrast, the lexical rule-based approach suggested by Müller (2003b) does not have any problems with such multiple alternations: the application of the passivization lexical rule suppresses the least oblique argument and provides a lexical item with the argument structure of a personal passive. Then the impersonal lexical rule applies and suppresses the now least oblique argument (the object of the active clause). The result is an impersonal construction without any arguments, like the one in (88).
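A minimal sketch of this rule cascade (my own illustration, not Müller's (2003b) formalization): argument structure is a list ordered by obliqueness, and each alternation suppresses the currently least oblique argument:

```python
def demote(arg_st):
    """One alternation step: suppress the least oblique argument (the
    first element of the obliqueness-ordered arg-st list)."""
    return arg_st[1:]

# Arguments bear structural case; the least oblique remaining argument
# is realized as the subject.
active = ["NP[str](agent)", "NP[str](patient)"]  # transitive 'hit'
personal_passive = demote(active)            # (87): the patient is subject
double_passive = demote(personal_passive)    # (88): no arguments at all

print(personal_passive)  # ['NP[str](patient)']
print(double_passive)    # []
```

Because each rule is a function from lexical items to lexical items, the second application composes with the first, which is exactly what inheritance and one-shot unification cannot do.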

### **21.4.3 Is there an alternative to lexical rules?**

In this section we have reviewed the attempts to replace lexical rules with methods of relating constructions. These attempts have not been successful, in our assessment. We believe that the essential problem with them is that they fail to capture the derivational character of the relationship between certain word forms. Alternations signaled by passive voice and causative morphology are relatively simple and regular when formulated as operations on lexical valence structures that have been abstracted from their phrasal context. But non-transformational rules or systems formulated on the phrasal structures encounter serious problems that have not yet been solved.

## **21.5 Further problems for phrasal approaches**

Müller (2006) discussed the problems shared by proposals that assume phrasal constructions to be a fixed configuration of adjacent material, as for instance the one by Goldberg & Jackendoff (2004). I showed that many argument structure constructions allow great flexibility as far as the order of their parts is concerned. Back then I discussed resultative constructions in their interaction with free datives, passive, and other valence-changing phenomena and showed that for all the constructions licensed by such interactions the construction parts can be scrambled, the verb can appear in different positions, arguments can be extracted, and so on. The following subsection discusses particle verbs, which pose similar problems for theories that assume a phrasal construction with a fixed order of verb and particle.

### **21.5.1 Particle verbs and commitment to phrase structure configurations**

A general problem of approaches that assume phrase structure configurations paired with meaning is that the construction may appear in different contexts: the construction parts may be involved in derivational morphology (as discussed in the previous subsection) or the construction parts may be involved in dislocations. A clear example of the latter type is the phrasal analysis of particle verbs that was suggested by Booij (2002: Section 2; 2012) and Blom (2005), working in the frameworks of Construction Grammar and LFG, respectively. The authors working on Dutch and German assume that particle verbs are licensed by phrasal constructions (pieces of phrase structure) in which the first slot is occupied by the particle.

(89) [ X [ ]<sub>V</sub> ]<sub>V′</sub> where X = P, Adv, A, or N

Examples for specific Dutch constructions are:

(90) a. [ af [ ]<sub>V</sub> ]<sub>V′</sub>
	- b. [ door [ ]<sub>V</sub> ]<sub>V′</sub>
	- c. [ op [ ]<sub>V</sub> ]<sub>V′</sub>

This suggestion comes with the claim that particles cannot be fronted. This claim is made frequently in the literature, but it is based on introspection and is wrong for languages like Dutch and German. On Dutch see Hoeksema (1991: 19); on German, Müller

(2002a,c, 2003c, 2007c).<sup>41</sup> A German example is given in (91); several pages of attested examples can be found in the cited references and some more complex examples will also be discussed in Section 21.7.3 on page 662.

(91) Los damit geht es schon am 15. April.<sup>42</sup>
	part there.with goes it already at.the 15. April
	'It already starts on April the 15th.'

Particle verbs are mini-idioms. So the conclusion is that idiomatic expressions that allow for a certain flexibility in order should not be represented as phrasal configurations describing adjacent elements. For some idioms, a lexical analysis along the lines of Sag (2007) seems to be required.<sup>43</sup> The issue of particle verbs will be taken up in Section 21.7.3 again, where we discuss evidence for/against phrasal analyses from neuroscience.

## **21.6 Arguments from language acquisition**

The question of whether language acquisition is pattern-based and hence can be seen as evidence for the phrasal approach has already been touched upon in Sections 16.3 and 16.4. It was argued that constructions can be realized discontinuously in coordinations and hence it is the notion of dependency that has to be acquired; acquiring simple continuous patterns is not sufficient.

Since the present discussion about phrasal and lexical approaches deals with specific proposals, I would like to add two more special subsections: Section 21.6.1 deals with the recognizability of constructions and Section 21.6.2 discusses specific approaches to coordination in order to demonstrate how frameworks deal with the discontinuous realization of constructions.

## **21.6.1 Recognizability of constructions**

I think that a purely pattern-based approach is weakened by the existence of examples like (92):

	- a. John tried to sleep.
	- b. John tried to be loved.

Although no argument of *sleep* is present in the phrase *to sleep* and neither a subject nor an object is realized in the phrase *to be loved*, both phrases are recognized as phrases containing an intransitive and a transitive verb, respectively.<sup>44</sup>

<sup>41</sup>Some more fundamental remarks on introspection and corpus data with relation to particle verbs can also be found in Müller (2007c) and Meurers & Müller (2009).

<sup>42</sup>taz, 01.03.2002, p. 8, see also Müller (2005c: 313).

<sup>43</sup>Note also that the German example is best described as a clause with a complex internally structured constituent in front of the finite verb and it is doubtful whether linearization-based proposals like the ones in Kathol (1995: 244–248) or Wetta (2011) can capture this. See also the discussion of multiple frontings in connection to Dependency Grammar in Section 11.7.1.

<sup>44</sup>Constructionist theories do not assume empty elements. Of course, in the GB framework the subject would be realized by an empty element. So it would be in the structure, although inaudible.

The same applies to arguments that are supposed to be introduced/licensed by a phrasal construction: in (93) the resultative construction is passivized and then embedded under a control verb, resulting in a situation in which only the result predicate (*tot* 'dead') and the matrix verb (*geschossen* 'shot') are realized overtly within the local clause, bracketed here:

(93) Der kranke Mann wünschte sich, [totgeschossen zu werden].<sup>45</sup>
	the sick man wished self dead.shot to be
	'The sick man wanted to be shot dead.'

Of course passivization and control are responsible for these occurrences, but the important point here is that arguments can remain unexpressed or implicit and nevertheless a meaning usually connected to some overt realization of arguments is present (Müller 2007b: Section 4). So, what has to be acquired by the language learner is that when a result predicate and a main verb are realized together, they contribute the resultative meaning. To take another example, NP arguments that are usually realized in active resultative constructions may remain implicit in nominalizations like the ones in (94):

	- b. Wir lassen heut das Totgeschieße,
		we let today the annoying.repeated.shooting.dead
		Weil man sowas heut nicht tut.
		since one such.thing today not does
		Und wer einen Tag sich ausruht,
		and who a day self rests
		Der schießt morgen doppelt gut.<sup>47</sup>
		this shoots tomorrow twice good
		'We do not shoot anybody today, since one does not do this today, and those who rest a day shoot twice as well tomorrow.'

The argument corresponding to the patient of the verb (the one who is shot) can remain unrealized, because of the syntax of nominalizations. The resultative meaning is still understood, which shows that it does not depend upon the presence of a resultative construction involving Subj V Obj and Obl.

### **21.6.2 Coordination and discontinuousness**

The following subsection deals with analyses of coordination in some of the frameworks that were introduced in this book. The purpose of the section is to show that simple

<sup>45</sup>Müller (2007b: 387).

<sup>46</sup>https://www.elitepartner.de/forum/wie-gehen-die-maenner-mit-den-veraenderten-anspruechen-derfrauen-um-26421-6.html. 26.03.2012.

<sup>47</sup>Gedicht für den Frieden, Oliver Kalkofe, http://www.golyr.de/oliver-kalkofe/songtext-gedicht-fuer-denfrieden-417329.html. 2018-02-20.

phrasal patterns have to be broken up in coordination structures. This was already mentioned in Section 16.3, but I think it is illuminative to have a look at concrete proposals.

In Categorial Grammar, there is a very elegant treatment of coordination (see Steedman 1991). A generalization with regard to so-called symmetric coordination is that two objects with the same syntactic properties are combined into an object with those same properties. We have already encountered the relevant data in the discussion of the motivation for feature geometry in HPSG on page 277. Their English versions are repeated below as (95):

	- b. He knows and loves this record.
	- c. He is dumb and arrogant.

Steedman (1991) analyzes examples such as those in (95) with a single rule:

$$\text{(96)}\quad \text{X conj X} \Rightarrow \text{X}$$

This rule combines two categories of the same kind with a conjunction in between to form a category of the same kind as the conjuncts.<sup>48</sup> Figure 21.12 shows the analysis of (95a) and Figure 21.13 gives the analysis of (95b).


Figure 21.12: Coordination of two NPs in Categorial Grammar


Figure 21.13: Coordination of two transitive verbs in Categorial Grammar

<sup>48</sup>Alternatively, one could analyze all three examples using a single lexical entry for the conjunction *and*: *and* is a functor that takes a word or phrase of any category to its right and after this combination then needs to be combined with an element of the same category to its left in order to form the relevant category after combining with this second element. This means that the category for *and* would have the form (X\X)/X. This analysis does not require any coordination rules. If one wants to assume, as is common in GB/MP, that every structure has a head, then a headless analysis that assumes a special rule for coordination like the one in (96) would be ruled out.

If we compare this analysis to the one that would have to be assumed in traditional phrase structure grammars, the advantages become apparent: in a phrase structure grammar, one rule is required for the analysis of NP coordination, where two NPs are coordinated to form an NP, and another is required for the analysis of V coordination. This is not only undesirable from a technical point of view, it also fails to capture the basic property of symmetric coordination: two symbols with the same syntactic category are combined with each other.
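The generality of (96) can be simulated in a few lines. In the toy fragment below (categories as plain strings; an illustration, not a full Categorial Grammar implementation), a single coordination function covers NP, verb, and adjective coordination alike, and the coordinated transitive verb combines with its object just like a simple verb:

```python
# Categories as plain strings: "np", "(s\\np)/np" for a transitive verb.
def coordinate(left, conj, right):
    """(96): X conj X => X, for any category X."""
    assert conj in ("and", "or") and left == right
    return left

def forward_apply(functor, argument):
    """Forward application: X/Y combined with Y yields X."""
    assert functor.endswith("/" + argument), "category mismatch"
    result = functor[: -(len(argument) + 1)]
    # drop parentheses that merely grouped the result category
    if result.startswith("(") and result.endswith(")"):
        result = result[1:-1]
    return result

tv = "(s\\np)/np"                            # knows, loves
knows_and_loves = coordinate(tv, "and", tv)  # same category as one verb
print(forward_apply(knows_and_loves, "np"))  # s\np, as in Figure 21.13
```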

It is interesting to note that it is possible to analyze phrases such as (97) in this way:

(97) give George a book and Martha a record

In Section 1.3.2.4, we saw that this kind of sentence is problematic for constituent tests. However, in Categorial Grammar, such phrases can be analyzed without any problems if one adopts rules for type raising and composition, as Dowty (1988) and Steedman (1991) do. In Section 8.5, we already encountered forward type raising as well as forward and backward composition. In order to analyze (97), one requires backward type raising, repeated in (98), and backward composition, repeated in (99):

$$\text{(98)}\quad \text{X} \Rightarrow \text{T}\backslash(\text{T}/\text{X})$$

$$\text{(99)}\quad \text{Y}\backslash\text{Z} \quad \text{X}\backslash\text{Y} \Rightarrow \text{X}\backslash\text{Z}$$
Dowty's analysis of (97) is given in Figure 21.14. vp stands for s\np.


Figure 21.14: Gapping in Categorial Grammar

This kind of type-raising analysis was often criticized because raising categories leads to many different analytical possibilities for simple sentences. For example, one could first combine a type-raised subject with the verb and then combine the resulting constituent with the object. This would mean that we would have a [[S V] O] in addition to the standard [S [V O]] analysis. Steedman (1991) argues that both analyses differ in terms of information structure and it is therefore valid to assume different structures for the sentences in question.

I will not go into these points further here. However, I would like to compare Steedman's lexical approach to phrasal analyses: all approaches that assume that the ditransitive construction represents a continuous pattern encounter a serious problem with the examples discussed above. This can be best understood by considering the TAG analysis

of coordination proposed by Sarkar & Joshi (1996). If one assumes that [Sbj TransVerb Obj] or [S [V O]] constitutes a fixed unit, then the trees in Figure 21.15 form the starting point for the analysis of coordination.

Figure 21.15: Elementary trees for *knows* and *loves*

If one wants to use these trees/constructions for the analysis of (100), there are in principle two possibilities: one assumes that two complete sentences are coordinated or alternatively, one assumes that some nodes are shared in a coordinated structure.

(100) He knows and loves this record.

Abeillé (2006) has shown that it is not possible to capture all the data if one assumes that cases of coordination such as those in (100) always involve the coordination of two complete clauses. It is also necessary to allow for lexical coordination of the kind we saw in Steedman's analysis (see also Section 4.6.3). Sarkar & Joshi (1996) develop a TAG analysis in which nodes are shared in coordinate structures. The analysis of (100) can be seen in Figure 21.16. The subject and object nodes are only present once in this figure.

Figure 21.16: TAG analysis of *He knows and loves this record.*

The S nodes of both elementary trees dominate the *he* NP. In the same way, the object NP node belongs to both VPs. The conjunction connects the two verbs, as indicated by the thick lines. Sarkar and Joshi provide an algorithm that determines which nodes are


to be shared. The structure may look strange at first, but for TAG purposes, it is not the derived tree but rather the derivation tree that is important, since this is the one that is used to compute the semantic interpretation. The authors show that the derivation trees for the example under discussion and even more complex examples can be constructed correctly.

In theories such as HPSG and LFG, where structure building is, as in Categorial Grammar, driven by valence, the above sentence is unproblematic: both verbs are conjoined and then the combination behaves like a simple verb. The analysis is given in Figure 21.17. This analysis is similar to the Categorial Grammar analysis in Figure 21.13.<sup>49</sup>

Figure 21.17: Selection-based analysis of *He knows and loves this record.* in tree notation

With Goldberg's plugging analysis one could also adopt this approach to coordination: here, *knows* and *loves* would first be plugged into a coordination construction and the result would then be plugged into the transitive construction. Exactly how the semantics of *knows and loves* is combined with that of the transitive construction is unclear, since the meaning of this phrase is something like *and*′(*know*′(x, y), *love*′(x, y)), that is, a complex event with at least two open argument slots x and y (and possibly additionally an event and a world variable, depending on the semantic theory that is used). Goldberg would probably have to adopt an analysis such as the one in Figure 21.16 in order to maintain the plugging analysis.

Croft would definitely have to adopt the TAG analysis since the verb is already present in his constructions. For the example in (97), both Goldberg and Croft would have to draw from the TAG analysis in Figure 21.18 on the next page. The consequence of this is that one requires discontinuous constituents. Since coordination allows a considerable number of variants, there can be gaps between all arguments of constructions. An example with a ditransitive verb is given in (101):

(101) He gave George and sent Martha a record.

See Crysmann (2008) and Beavers & Sag (2004) for HPSG analyses that assume discontinuous constituents for particular coordination structures.

<sup>49</sup>A parallel analysis in Dependency Grammar is possible as well. Tesnière's original analysis was different though. See Section 11.6.2.1 for discussion.

Figure 21.18: TAG analysis of *He gave George a book and Martha a record.*

The result of these considerations is that the argument that particular elements occur next to each other and that this occurrence is associated with a particular meaning is considerably weakened. What competent speakers do acquire is the knowledge that heads must occur with their arguments somewhere in the utterance and that all the requirements of the heads involved have to somehow be satisfied (θ-Criterion, coherence/completeness, empty spr and comps lists). The heads themselves need not necessarily occur directly adjacent to their arguments. See the discussion in Section 16.3 about pattern-based models of language acquisition.

The computation of the semantic contribution of complex structures such as those in Figure 21.18 is by no means trivial. In TAG, there is the derivation tree in addition to the derived tree that can then be used to compute the semantic contribution of a linguistic object. Construction Grammar does not have this separate level of representation. The question of how the meaning of the sentences discussed here is derived from their component parts still remains open for phrasal approaches.

Concluding the section on language acquisition, we assume that a valence representation is the result of language acquisition, since this is necessary for establishing the dependency relations in various possible configurations in an utterance. See also Behrens (2009: 439) for a similar conclusion.

## **21.7 Arguments from psycho- and neurolinguistics**

This section has three parts: in Subsection 21.7.1 we compare phrasal approaches with approaches in which valence alternations are modeled by lexical rules, underspecification, or disjunctions. Subsection 21.7.2 discusses approaches to light verb constructions, and Subsection 21.7.3 is devoted to neurolinguistic findings.

### **21.7.1 Lexical rules vs. phrasal constructions**

Goldberg (1995: Section 1.4.5) uses evidence from psycholinguistic experiments to argue against lexical approaches that use lexical rules to account for argument structure alternations: Carlson & Tanenhaus (1988) showed that sentences with true lexical ambiguity like those in (102) and sentences with two uses of the same verb that share a core meaning like those in (103) have different processing times.

(102) a. Bill set the alarm clock onto the shelf.
	- b. Bill set the alarm clock for six.

(103) a. Bill loaded the truck onto the ship.
	- b. Bill loaded the truck with bricks.

Errors due to lexical ambiguity cause a bigger increase in processing time than errors in the use of the same verb in a different valence pattern. The experiments showed a bigger difference in processing times for the sentences in (102) than for the sentences in (103). The difference in processing times between (103a) and (103b) would be explained by different preferences for the phrasal constructions. In a lexicon-based approach, one could explain the difference by assuming that one lexical item is more basic, that is, stored in the mental dictionary, and that the other is derived from the stored one. The application of lexical rules would be time-consuming, but since the lexical items are related, the overall time consumption is smaller than the time needed to process two unrelated items (Müller 2002a: 405).

Alternatively one could assume that the lexical items for both valence patterns are the result of lexical rule applications. As with the phrasal constructions, the lexical rules would have different preferences. This shows that the lexical approach can explain the experimental results as well, so that they do not force us to prefer phrasal approaches.

Goldberg (1995: 18) claims that lexical approaches have to assume two variants of *load* with different meanings and that this would predict that the *load* alternants would behave like two verbs that really have absolutely different meanings. The experiments discussed above show that such predictions are wrong and hence lexical analyses would be falsified. However, as was shown in Müller (2010a: Section 11.11.8.2), the argumentation contains two flaws: let us assume that the construction meaning of the construction that licenses (103a) is C₁ and the construction meaning of the construction that licenses (103b) is C₂. Under such assumptions, the semantic contribution of the two lexical items in the lexical analysis would be (104), where load(…) is the contribution of the verb that would be assumed in phrasal analyses.

(104) a. load (onto): C₁ ∧ load(…)
      b. load (with): C₂ ∧ load(…)

(104) shows that the lexical items partly share their semantic contribution. We hence predict that the processing of the dispreferred argument realization of *load* is easier than the processing of the dispreferred meaning of *set*: in the latter case, a completely new verb has to be activated, while in the former case, parts of the meaning are already activated.<sup>50</sup>
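This prediction can be made concrete with a toy computation over the representations in (104). The following Python snippet is merely illustrative arithmetic, not a processing model; the set encoding of meanings and the overlap measure are assumptions introduced here for exposition.

```python
# Meanings from (104), encoded as sets of semantic conjuncts:
load_onto = {"C1", "load"}   # load (onto): C1 ∧ load(...)
load_with = {"C2", "load"}   # load (with): C2 ∧ load(...)
set_put    = {"put"}         # set (onto the shelf): one sense
set_adjust = {"adjust"}      # set (for six): a truly distinct sense

def overlap(active, target):
    """Fraction of the target meaning that is already activated."""
    return len(active & target) / len(target)

print(overlap(load_onto, load_with))  # 0.5 -> parts of the meaning already active
print(overlap(set_put, set_adjust))   # 0.0 -> a completely new meaning
```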

Goldberg (1995: 107) argues against lexical rule-based approaches for locative alternations like (105), since according to her such approaches have to assume that one of the verb forms has to be the more basic form.

<sup>50</sup>See also Croft (2003: 64–65) for a brief rejection of Goldberg's interpretation of the experiment that corresponds to what is said here.

(105) a. He loaded hay onto the wagon.
      b. He loaded the wagon with hay.

She remarks that this is problematic since we do not have clear intuitions about what the basic and what the derived forms are. She argues that the advantage of phrasal approaches is that various constructions can be related to each other without requiring the assumption that one of the constructions is more basic than the other. There are two phrasal patterns and the verb is used in one of the two patterns. This criticism can be addressed in two ways: first, one could introduce two lexical types (for instance *onto-verb* and *with-verb*) into a type hierarchy. The two types correspond to the two valence frames that are needed for the analysis of (105a) and (105b). These types can have a common supertype (*onto-with-verb*) which is relevant for all *spray*/*load* verbs. One of the subtypes or the respective lexical item of the verb is the preferred one. This corresponds to a disjunction in the lexicon, while the phrasal approach assumes a disjunction in the set of phrasal constructions.

A variant of this approach is to assume that the lexical description of *load* just contains the supertype describing all *spray*/*load* verbs. Since model theoretic approaches assume that all structures that are models of utterances contain only maximally specific types (see for instance King (1999) and Pollard & Sag (1994: 21)), it is sufficient to say about verbs like *load* that they are of type *onto-with-verb*. As this type has exactly two subtypes, *load* has to be either *onto-verb* or *with-verb* in an actual model.<sup>51</sup>

A second option is to stick with lexical rules and to assume a single representation for the root of a verb that is listed in the lexicon. In addition, one assumes two lexical rules that map this basic lexical item onto other items that can be used in syntax after being inflected. The two lexical rules can be described by types that are part of a type hierarchy and that have a common supertype. This would capture commonalities between the lexical rules. We therefore have the same situation as with phrasal constructions (two lexical rules vs. two phrasal constructions). The only difference is that the action is one level deeper in the lexical approach, namely in the lexicon (Müller 2002a: 405–406).
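The two options just described can be made concrete with a small sketch. The following Python fragment is a toy illustration only, not a fragment of any actual HPSG implementation; the type names, the rule functions, and the simplified ARG-ST values are invented for expository purposes.

```python
# Toy illustration of the two lexical options for the spray/load alternation.

# Option 1: a small type hierarchy. 'onto-with-verb' is the common supertype;
# in an actual model, only maximally specific types may occur.
TYPE_HIERARCHY = {
    "onto-verb": "onto-with-verb",
    "with-verb": "onto-with-verb",
}

def maximal_specializations(supertype):
    """Return the maximally specific subtypes of a supertype."""
    return [t for t, parent in TYPE_HIERARCHY.items() if parent == supertype]

# The lexicon only states that 'load' is of the supertype; a model has to
# resolve it to one of the two subtypes.
assert maximal_specializations("onto-with-verb") == ["onto-verb", "with-verb"]

# Option 2: one stored root plus two lexical rules that derive the items
# used in syntax. Feature structures are simplified to dictionaries.
LOAD_ROOT = {"phon": "load", "arg-st": ["NP[agent]"]}

def onto_rule(root):
    """Derive the variant with the theme as direct object."""
    return {**root, "arg-st": root["arg-st"] + ["NP[theme]", "PP[onto]"]}

def with_rule(root):
    """Derive the variant with the location as direct object."""
    return {**root, "arg-st": root["arg-st"] + ["NP[location]", "PP[with]"]}

derived_items = [onto_rule(LOAD_ROOT), with_rule(LOAD_ROOT)]
```

In both options, the disjunction between the two valence patterns is stated once; the only difference is whether it lives in the hierarchy of lexical types or in the set of lexical rules.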

The argumentation with regard to the processing of resultative constructions like (106c) is parallel:

(106) a. He drinks.
      b. He drinks the milk.
      c. He drinks the pub empty.

When humans parse a sentence, they build up structure incrementally. If one hears a word that is incompatible with the current hypothesis, the parsing process breaks down or the current hypothesis is revised. In (106c), *the pub* does not correspond to the normal transitive use of *drink*, so the respective hypothesis has to be revised. In the phrasal approach, the resultative construction would have to be used instead of the transitive construction. In the lexical analysis, the lexical item that is licensed by the resultative lexical rule would have to be used rather than the bivalent one.

<sup>51</sup>This analysis does not allow the specification of verb specific preferences for one of the realization patterns since the lexicon contains the general type only.

Building syntactic structure and lexicon access in general place different demands on our processing capacities. However, when (106c) is parsed, the lexical items for *drink* are already active; we only have to use a different one. It is currently unclear to us whether psycholinguistic experiments can differentiate between the two approaches, but it seems unlikely.

### **21.7.2 Light verbs**

Wittenberg, Jackendoff, Kuperberg, Paczynski, Snedeker & Wiese (2014) report on a number of experiments that test predictions made by various approaches to light verb constructions. (107a) shows a typical light verb construction: *take* is a light verb that is combined with the nominal that provides the main predication.

(107) a. take a walk to the park
      b. walk to the park

Wittenberg & Piñango (2011) examined two psychologically plausible theories of light verb constructions. The phrasal approach assumes that light verb constructions are stored objects associated with semantics (Goldberg 2003b). The alternative compositional view assumes that the semantics is computed as a fusion of the semantics of the event noun and the semantics of the light verb (Grimshaw 1997, Butt 2003, Jackendoff 2002, Culicover & Jackendoff 2005, Müller 2010b, Beavers et al. 2008). Since light verb constructions are extremely frequent (Piñango, Mack & Jackendoff 2006; Wittenberg & Piñango 2011: 399), the phrasal approaches assuming that light verb constructions are stored items with the object and verb fixed predict that light verb constructions should be retrievable faster than non-light verb constructions like (108) (Wittenberg & Piñango 2011: 396).

(108) take a frisbee to the park

This is not the case. As Wittenberg and Piñango found, there is no difference in processing at the licensing condition (the noun in VO languages like English and the verb in OV languages like German).

However, Wittenberg & Piñango (2011) found an increased processing load 300 ms *after* the light verb construction is processed. The authors explain this by assuming that semantic integration of the noun with the verbal meaning takes place after the syntactic combination. While the syntactic combination is rather fast, the semantic computation takes additional resources, and this is measurable at 300 ms. The verb contributes aspectual information and integrates the meaning of the nominal element; the semantic roles are fused. The resource consumption effect would not be expected if the complete light verb construction were a stored item that is retrieved together with the complete meaning (p. 404). We can conclude that Wittenberg and Piñango's results are compatible with the lexical proposal, but incompatible with the phrasal view.

### **21.7.3 Arguments from neurolinguistics**

Pulvermüller, Cappelle & Shtyrov (2013) discuss neurolinguistic facts and relate them to the CxG view of grammar theory. One important finding is that deviant words (lexical items) cause brain responses that differ in polarity from brain responses to incorrect strings of words, that is, to ill-formed syntactic combinations. This suggests that there is indeed an empirical basis for deciding the issue.

Concerning the standard example of the Caused-Motion Construction in (109) the authors write the following:

(109) She sneezed the foam off the cappuccino.<sup>52</sup>

this constellation of brain activities may initially lead to the co-activation of the verb *sneeze* with the DCNAs for *blow* and thus to the sentence mentioned. Ultimately, such co-activation of a one-place verb and DCNAs associated with other verbs may result in the former one-place verb being subsumed into a three-place verb category and DCNA set, a process which arguably has been accomplished for the verb *laugh* as used in the sequence *laugh NP off the stage*. (Pulvermüller, Cappelle & Shtyrov 2013)

A DCNA is a discrete combinatorial neuronal assembly. Regarding the specifics of DCNAs, the authors write that

Apart from linking categories together, typical DCNAs establish a temporal order between the category members they bind to. DCNAs that do not impose temporal order (thus acting, in principle, as AND units for two constituents) are thought to join together constituents whose sequential order is free or allow for scrambling. (Pulvermüller, Cappelle & Shtyrov 2013: 404)

I believe that this view is entirely compatible with the lexical view outlined above: the lexical item or DCNA requires certain arguments to be present. A lexical rule that relates an intransitive verb to one that can be used in the Caused-Motion Construction is an explicit representation of what it means to activate the valence frame of *blow*.

The authors cite earlier work (Cappelle, Shtyrov & Pulvermüller 2010) and argue that particle verbs are lexical objects, allowing for a discontinuous realization despite their lexical status (p. 21). They restrict their claim to frequently occurring particle verbs. This claim is of course compatible with our assumptions here, but the differences in brain behavior are interesting when it comes to fully productive uses of particle verbs. For instance, any semantically appropriate monovalent verb in German can be combined with the aspectual particle *los*: *lostanzen* 'start to dance', *loslachen* 'start to laugh', *lossingen* 'start to sing', …. Similarly, the combination of monovalent verbs with the particle *an* with the reading *directed-towards* is also productive: *anfahren* 'drive towards', *anlachen* 'laugh in the direction of', *ansegeln* 'sail towards', … (see Stiebels (1996) on various productive patterns). The interesting question is how particle verbs behave that follow these patterns but occur with low frequency. This is still an open question as far as the experimental evidence is concerned, but as I argue below, lexical proposals for particle verbs such as the one suggested by Müller (2003c) are compatible with both possible outcomes.

Summarizing the discussion so far, lexical approaches are compatible with the accumulated neurobiological evidence, and as far as particle verbs are concerned they seem to be better suited than the phrasal proposals by Booij (2002: Section 2) and Blom (2005) (see Section 21.5.1 for discussion).

<sup>52</sup>Goldberg (2006: 42).


However, in general, it remains an open question what it means to be a discontinuous lexical item. The idea of discontinuous words is quite old (Wells 1947), but there have not been many formal accounts of it. Nunberg, Sag & Wasow (1994) suggest a representation in a linearization-based framework of the kind that was proposed by Reape (1994); Kathol (1995: 244–248) and Crysmann (2002) worked out such analyses in detail. Kathol's lexical item for *aufwachen* 'to wake up' is given in (110):

(110) *aufwachen* (following Kathol 1995: 246):

$$
\begin{bmatrix}
\ldots|\text{HEAD}~\boxed{1}~\textit{verb}\\
\ldots|\text{VCOMP}~\langle\,\rangle\\
\text{DOM}~\left\langle
\begin{bmatrix}
\langle\,\textit{wachen}\,\rangle\\
\ldots|\text{HEAD}~\boxed{1}\\
\ldots|\text{VCOMP}~\langle\,\boxed{2}\,\rangle
\end{bmatrix}
\right\rangle
\bigcirc
\left\langle
\boxed{2}\,
\begin{bmatrix}
\langle\,\textit{auf}\,\rangle\\
\textit{vc}\\
\ldots|\text{FLIP}~-
\end{bmatrix}
\right\rangle
\end{bmatrix}
$$

The lexical representation contains the list-valued feature dom, whose value includes a description of the main verb and of the particle (see Section 11.7.2.2 for details). The dom list contains the dependents of a head. The dependents can be ordered in any order provided no linearization rule is violated (Reape 1994). The dependency between the particle and the main verb is characterized by the value of the vcomp feature, a valence feature for the selection of arguments that form a complex predicate with their head. The shuffle operator ⃝ concatenates two lists without specifying an order between the elements of the two lists, that is, both ⟨*wachen*, *auf*⟩ and ⟨*auf*, *wachen*⟩ are possible. The little marking *vc* is an assignment to a topological field in the clause.
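The shuffle operator can be stated very compactly. The following Python function is a minimal sketch of Reape-style shuffle: it enumerates all interleavings of two lists that preserve the relative order within each input list; linearization rules would then filter the admissible orders.

```python
def shuffle(xs, ys):
    """All interleavings of xs and ys that preserve the relative
    order of the elements within each input list."""
    if not xs:
        return [ys]
    if not ys:
        return [xs]
    return [[xs[0]] + rest for rest in shuffle(xs[1:], ys)] + \
           [[ys[0]] + rest for rest in shuffle(xs, ys[1:])]

# Shuffle alone licenses both orders of verb and particle:
print(shuffle(["wachen"], ["auf"]))
# [['wachen', 'auf'], ['auf', 'wachen']]
```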

I criticized such linearization-based proposals since it is unclear how analyses that claim that the particle is just linearized in the domain of its verb can account for sentences like (111), in which complex syntactic structures are involved (Müller 2007b). German is a V2 language and the fronting of a constituent into the position before the finite verb is usually described as some sort of nonlocal dependency; that is, even authors who favor linearization-based analyses do not assume that the initial position is filled by simple reordering of material (Kathol 2000, Müller 1999b, 2002a, Bjerre 2006).<sup>53</sup>

(111) a. [vf [mf Den Atem] [vc an]] hielt die ganze Judenheit.<sup>54</sup>
      [vf [mf the breath] [vc part]] held the whole Jewish.community
      'The whole Jewish community held their breath.'

<sup>53</sup>Kathol (1995: Section 6.3) working in HPSG suggested such an analysis for simple sentences, but later changed his view. Wetta (2011) also working in HPSG assumes a purely linearization-based approach. Similarly Groß & Osborne (2009) working in Dependency Grammar assume that there is a simple dependency structure in simple sentences while there are special mechanisms to account for extraction out of embedded clauses. I argue against such proposals in Müller (2023a) referring to the scope of adjuncts, coordination of simple with complex sentences and Across the Board Extraction and apparent multiple frontings. See also Section 11.7.1.

<sup>54</sup>Lion Feuchtwanger, *Jud Süß*, p. 276, quoted from Grubačić (1965: 56).


The conclusion that has to be drawn from examples like (111) is that particles interact in complex ways with the syntax of sentences. This is captured by the lexical treatment that was suggested in Müller (2002a: Chapter 6) and Müller (2003c): the main verb selects for the verbal particle. By assuming that *wachen* selects for *auf*, the tight connection between verb and particle is represented.<sup>57</sup> Such a lexical analysis provides an easy way to account for fully nontransparent particle verbs like *an-fangen* 'to begin'. However, I also argued for a lexical treatment of transparent particle verbs like *losfahren* 'to start to drive' and *jemanden/etwas anfahren* 'drive directed towards somebody/something'. The analysis involves a lexical rule that licenses a verbal item selecting for an adjunct particle. The particles *an* and *los* can modify verbs and contribute arguments (in the case of *an*) and the particle semantics. This analysis can be shown to be compatible with the neuro-mechanical findings: if it is the case that even transparent particle verb combinations with low frequency are stored, then the rather general lexical rule that I suggested in the works cited above is the generalization of the relation between a large number of lexical particle verb items and their respective main verbs. The individual particle verbs would be special instantiations that have the form of the particle specified, as is also the case for non-transparent particle verbs like *anfangen*. If it should turn out that productive combinations with particle verbs of low frequency cause syntactic reflexes in the brain, this could be explained as well: the lexical rule licenses an item that selects for an adverbial element. This selection would then be seen as parallel to the relation between the determiner and the noun in the NP *der Mut* 'the courage', which Cappelle et al. (2010: 191) discuss as an example of a syntactic combination. Note that this analysis is also compatible with another observation made by Shtyrov, Pihko & Pulvermüller (2005): morphological affixes also cause the lexical reflexes. In my analysis, the stem of the main verb is related to another stem that selects for a particle. This stem can be combined with (derivational and inflectional) morphological affixes causing the lexical activation pattern in the brain. After this combination, the verb is combined with the particle, and the dependency can be either a lexical or a syntactic one, depending on the results of the experiments to be carried out. The analysis is compatible with both results.

<sup>55</sup>taz, bremen, 24.05.2004, p. 21.

<sup>56</sup>taz, 01.03.2002, p. 8.

<sup>57</sup>Cappelle et al. (2010: 197) write: "the results provide neurophysiological evidence that phrasal verbs are lexical items. Indeed, the increased activation that we found for existing phrasal verbs, as compared to infelicitous combinations, suggests that a verb and its particle together form one single lexical representation, i. e. a single lexeme, and that a unified cortical memory circuit exists for it, similar to that encoding a single word". I believe that my analysis is compatible with this statement.

Note that my analysis allows the principle of lexical integrity to be maintained. I therefore do not follow Cappelle, Shtyrov & Pulvermüller (2010: 198), who claim that they "provide proof that potentially separable multi-word items can nonetheless be word-like themselves, and thus against the validity of a once well-established linguistic principle, the Lexical Integrity Principle". I agree that non-transparent particle verbs are multi-word lexemes, but the existence of multi-word lexemes does not show that syntax has access to the word-internal morphological structure. The parallel between particle verbs and clearly phrasal idioms was discussed in Müller (2002a,c) and it was concluded that idiom-status is irrelevant for the question of wordhood. Since the interaction of clearly phrasal idioms with derivational morphology as evidenced by examples like (112) did not force grammarians to give up on lexical integrity, it can be argued that particle verbs are not convincing evidence for giving up the Lexical Integrity Principle either.<sup>58</sup>

(112) b. "Heath Ledger" kann ich nicht einmal schreiben, ohne dass mir sein ins Gras-Gebeiße wieder so wahnsinnig leid tut<sup>59</sup>
      "Heath Ledger" can I not even write without that me his in.the grass-biting again so crazy sorrow does
      'I cannot even write "Heath Ledger" without being sad again about his biting the dust.'

The example in (112b) involves the discontinuous derivation with the circumfix *Ge*- -*e* (Lüdeling 2001: Section 3.4.3; Müller 2002a: 324–327, 372–377; Müller 2003c: Section 2.2.1, Section 5.2.1). Still the parts of the idiom *ins Gras beiß-* 'bite the dust' are present and with them the idiomatic reading. See Sag (2007) for a lexical analysis of idioms that can explain examples like (112).

So, while I think that it is impossible to distinguish phrasal and lexical approaches for phenomena where heads are used with different valence patterns (Section 21.7.1), there seem to be ways to test whether patterns with high frequency and strong collocations should be analyzed as one fixed chunk of material with a fixed form and a fixed meaning or whether they should be analyzed compositionally.

# **21.8 Arguments from statistical distribution**

In this section, we want to look at arguments from statistics that have been claimed to support a phrasal view. We first look at Data-Oriented Parsing, a technique that was successfully used by Bod (2009b) to model language acquisition, and then we turn to the collostructional analysis of Stefanowitsch & Gries (2009). Lastly, we argue that these distributional analyses cannot decide the question whether argument structure constructions are phrasal or lexical.

<sup>58</sup>However, see Booij (2009) for some challenges to lexical integrity.

<sup>59</sup>http://www.coffee2watch.at/egala. 2012-03-23


### **21.8.1 Unsupervised Data-Oriented Parsing**

In Section 13.8.3, we saw Bod's (2009b) approach to the structuring of natural language utterances. If one assumes that language is acquired from the input without innate knowledge, the structures that Bod extracts from the distribution of words would have to be the ones that children also learn (parts of speech, meaning, and context would also have to be included). These structures would then also have to be the ones assumed in linguistic theories. Since Bod does not have enough data, he carried out experiments under the assumption of binary-branching trees and, for this reason, it is not possible to draw any conclusions from his work about whether rules license flat or binary-branching structures. There will almost certainly be interesting answers to this question in the future. What can certainly not be determined in a distribution-based analysis is the exact node in the tree where meaning is introduced. Bod (2009a: 132) claims that his approach constitutes "a testable realization of CxG" in the Goldbergian sense, but the trees that he can construct do not help us to decide between phrasal or lexical analyses or analyses with empty heads. These alternative analyses are represented in Figure 21.19 on the following page.<sup>60</sup> The first figure stands for a complex construction that contributes the meaning as a whole. The second figure corresponds to the analysis with a lexical rule and the third corresponds to the analysis with an empty head. A distributional analysis cannot decide between these theoretical proposals. Distribution is computed with reference to words; what the words actually mean is not taken into account. As such, it is only possible to say that the word *fischt* 'fishes' occurs in a particular utterance; it is not possible to see whether this word contains resultative semantics or not. Similarly, a distributional analysis does not help to distinguish between theoretical analyses with or without an empty head. The empty head is not perceptible in the signal. It is a theoretical construct and, as we have seen in Section 19.5, it is possible to translate an analysis using an empty head into one with a lexical rule.

<sup>60</sup>The discussion is perhaps easier to follow if one assumes flat structures rather than binary-branching ones.

The first figure corresponds to the Goldbergian view of phrasal constructions where the verb is inserted into the construction and the meaning is present at the topmost node. In the second figure, there is a lexical rule that provides the resultative semantics and the corresponding valence information. In the third analysis, there is an empty head that combines with the verb and has ultimately the same effect as the lexical rule.

Figure 21.19: Three possible analyses for resultative construction: holistic construction, lexical rule, empty head

For the present example, any argumentation for a particular analysis will be purely theory-internal.

Although Unsupervised Data-Oriented Parsing (U-DOP) cannot help us to decide between analyses, there are areas of grammar for which these structures are of interest: under the assumption of binary-branching structures, there are different branching possibilities depending on whether one assumes an analysis with verb movement or not. This means that although one does not see an empty element in the input, there is a reflex in statistically-derived trees. The left tree in Figure 21.20 shows a structure that one would expect from an analysis following Steedman (2000: 159), see Section 8.3. The tree on the right shows a structure that would be expected from a GB-type verb movement analysis (see Section 3.2). But at present, there is no clear finding in this regard (Bod, p. c. 2009).

Figure 21.20: Structures corresponding to analyses with or without verb movement

There is a great deal of variance in the U-DOP trees. The structure assigned to an utterance depends on the verb (Bod, referring to the Wall Street Journal). Here, it would be interesting to see if this changes with a larger data sample. In any case, it would be interesting to look at how all verbs as well as particular verb classes behave. The U-DOP procedure applies to trees containing at least one word each. If one makes use of parts of speech in addition, this results in structures that correspond to the ones we have seen in the preceding chapters. Sub-trees would then not have two Xs as their daughters but rather NP and V, for example. It is also possible to do statistical work with this kind of subtree and use the part of speech symbols of words (the preterminal symbols) rather than the words themselves in the computation. For example, one would get trees for the symbol V instead of many trees for specific verbs. So instead of having three different trees for *küssen* 'kiss', *kennen* 'know' and *sehen* 'see', one would have three identical trees for the part of speech "verb" that correspond to the trees that are needed for transitive verbs. The probability of the V tree is therefore higher than the probabilities of the trees for the individual verbs. Hence one would have a better set of data to compute structures for utterances such as those in Figure 21.20. I believe that there are further results in this area to be found in the years to come.
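The effect of replacing words by their preterminal symbols can be illustrated with a toy computation. The following Python snippet is only a schematic illustration of the counting idea, not the actual U-DOP algorithm; the bracketings are invented placeholders.

```python
from collections import Counter

# Word-level subtrees for three transitive verbs (schematic bracketings):
word_trees = ["(VP (V küssen) NP)", "(VP (V kennen) NP)", "(VP (V sehen) NP)"]

# Replacing each word by its preterminal symbol collapses the three
# distinct subtrees into a single one, which therefore has a higher count
# (and hence a higher probability in a count-based model):
pos_trees = ["(VP V NP)" for _ in word_trees]

print(Counter(word_trees))  # each word-level subtree occurs once
print(Counter(pos_trees))   # Counter({'(VP V NP)': 3})
```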

Concluding this subsection, we contend that Bod's paper is a milestone in the Poverty of the Stimulus debate, but it does not and cannot show that a particular version of constructionist theories, namely the phrasal one, is correct.

### **21.8.2 Collostructions**

Stefanowitsch & Gries (2009: Section 5) assume a plugging analysis: "words occur in (slots provided by) a given construction if their meaning matches that of the construction". The authors claim that their *collostructional analysis has confirmed* [*the plugging analysis*] *from various perspectives*. Stefanowitsch and Gries are able to show that certain verbs occur more often than not in particular constructions, while other verbs never occur in the respective constructions. For instance, *give*, *tell*, *send*, *offer* and *show* are attracted by the Ditransitive Construction, while *make* and *do* are repelled by this construction, that is, they occur significantly less often in this construction than would be expected given the overall frequency of the verbs in the corpus. Regarding this distribution the authors write:

These results are typical for collexeme analysis in that they show two things. First, there are indeed significant associations between lexical items and grammatical structures. Second, these associations provide clear evidence for semantic coherence: the strongly attracted collexemes all involve a notion of 'transfer', either literally or metaphorically, which is the meaning typically posited for the ditransitive. This kind of result is typical enough to warrant a general claim that collostructional analysis can in fact be used to identify the meaning of a grammatical construction in the first place. (Stefanowitsch & Gries 2009: 943)

We hope that the preceding discussion has made clear that the distribution of words in a corpus cannot be seen as evidence for a phrasal analysis.

The corpus study shows that *give* is usually used with three arguments in a pattern that is typical for English (Subject Verb Object1 Object2) and that this verb forms a cluster with other verbs that have a transfer component in their meaning. The corpus data do not show whether this meaning is contributed by a phrasal pattern or by lexical entries that are used in a certain configuration.

# **21.9 Conclusion**

The essence of the lexical view is that a verb is stored with a valence structure indicating how it combines semantically and syntactically with its dependents. Crucially, that structure is abstracted from the actual syntactic context of particular tokens of the verb. Once abstracted, that valence structure can meet other fates besides licensing the phrasal structure that it most directly encodes: it can undergo lexical rules that manipulate that structure in systematic ways; it can be composed with the valence structure of another predicate; it can be coordinated with similar verbs; and so on. Such an abstraction allows for simple explanations of a wide range of robust, complex linguistic phenomena. We have surveyed the arguments against the lexical valence approach and in favor of a phrasal representation instead. We find the case for a phrasal representation of argument structure to be unconvincing: there are no compelling arguments in favor of such approaches, and they introduce a number of problems.


Assuming a lexical valence structure allows us to solve all the problems that arise with phrasal approaches.

# **21.10 Why (phrasal) constructions?**

In previous sections, I have argued against assuming too much phrasality in grammatical descriptions. If one wishes to avoid transformations in order to derive alternative patterns from a single base structure, while still maintaining lexical integrity, then phrasal analyses become untenable for analyzing all those phenomena where changes in valence and derivational morphology interact. There are, however, some areas in which these two do not interact.

In these cases, there is mostly a choice between analyses with silent heads and those with phrasal constructions. In this section, I will discuss some of these cases.

### **21.10.1 Verbless directives**

Jacobs (2008) showed that there are linguistic phenomena where it does not make sense to assume that there is a head in a particular group of words. These configurations are best described as phrasal constructions, in which the adjacency of particular constituents leads to a complete meaning that goes beyond the sum of its parts. Examples of the phenomena that are discussed by Jacobs are phrasal templates such as those in (113) and (114) and verbless directives as in (118):

(113) a. Wozu Konstruktionen?
         why constructions
      'Why constructions?'
      b. Warum ich?
         why I
      'Why me?'

(114) Den Hut in der Hand kam er ins Zimmer.
      the hat.acc in the hand came he in.the room
      'He came into the room hat in hand.'

In (113), we are dealing with abbreviated questions:

(115) a. Wozu braucht man Konstruktionen? / Wozu sollte man Konstruktionen annehmen?
         to.what needs one constructions / to.what should one constructions assume
      'Why do we need constructions?' / 'Why should we assume constructions?'
      b. Warum soll ich das machen? / Warum wurde ich ausgewählt? / Warum passiert mir sowas?
         why should I that do / why was I chosen / why happens me something.like.that
      'Why should I do that?' / 'Why was I chosen?' / 'Why do things like that happen to me?'

In (114), a participle has been omitted:

(116) Den Hut in der Hand haltend kam er ins Zimmer.
      the hat.acc in the hand holding came he in.the room
      'He came into the room hat in hand.'

Cases such as (114) can be analyzed with an empty head that corresponds to *haltend* 'holding'. For (113), on the other hand, one would require either a syntactic structure with multiple empty elements, or an empty head that selects both parts of the construction and contributes the components of meaning that are present in (115). If one adopts the first approach with multiple silent elements, then one would have to explain why these elements cannot occur in other constructions. For example, it would be necessary to assume an empty element corresponding to *man* 'one'/'you'. But such an empty element could never occur in embedded clauses since subjects cannot simply be omitted there:

(117) \* weil dieses Buch gerne liest
        because this book gladly reads
      Intended: 'because he/she/it likes to read this book'

If one were to follow the second approach, one would be forced to assume an empty head with particularly odd semantics.

The directives in (118) and (119) are similarly problematic (see also Jackendoff & Pinker (2005: 220) for parallel examples in English):


Here, it is also not possible to simply identify an elided verb. It is, of course, possible to assume an empty head that selects an adverb or a *mit*-PP, but this would be *ad hoc*. Alternatively, it would be possible to assume that adverbs in (118) select the *mit*-PP. Here, one would have to disregard the fact that adverbs do not normally take any arguments. The same is true of Jacobs's examples in (119). For these, one would have to assume that *in* and *zur* 'to the' are the respective heads. Each of the prepositions would then have to select a noun phrase and a *mit*-PP. While this is technically possible, it is as unattractive as the multiple lexical entries that Categorial Grammar has to assume for pied-piping constructions (see Section 8.6).

A considerably more complicated analysis has been proposed by G. Müller (2009a). Müller treats verbless directives as antipassive constructions. Antipassive constructions involve either the complete suppression of the direct object or its realization as an oblique element (PP). There can also be morphological marking on the verb. The subject is normally not affected by the antipassive but can, however, receive a different case in ergative case systems due to changes in the realization of the object. According to G. Müller, there is a relation between (120a) and (120b) that is similar to active-passive pairs:

(120) b. In den Müll mit diesen Klamotten!
         in the trash with these clothes
      'Throw these clothes into the trash!'

An empty passive morpheme absorbs the capability of the verb to assign accusative (see also Section 3.4 on the analysis of the passive in GB theory). The object therefore has to be realized as a PP or not at all. It follows from Burzio's Generalization that, as the accusative object has been suppressed, there cannot be an external argument. G. Müller assumes, like proponents of Distributed Morphology (e.g., Marantz 1997), that lexical entries are inserted into complete trees post-syntactically. The antipassive morpheme creates a feature bundle in the relevant tree node that is not compatible with German verbs such as *schmeißen* 'throw', and this is why only a null verb with the corresponding specifications can be inserted. Movement of the directional PP is triggered by mechanisms that cannot be discussed further here. The antipassive morpheme forces an obligatory reordering of the verb into initial position (to C, see Section 3.2 and Section 4.2). By stipulation, filling the prefield is only possible in sentences where the C position is filled by a visible verb, and this is why G. Müller's analysis only derives V1 clauses. These are interpreted as imperatives or polar questions. Figure 21.21 on the following page gives the analysis of (120b). Budde (2010) and Maché (2010) note that the discussion of the data has neglected the fact that there are also interrogative variants of the construction:

(121) b. Wohin mit dem ganzen Geld?
         where.to with the entire money
      'Where should all this money go?'

Since these questions correspond to V2 sentences, one does not require the constraint that the prefield can only be filled if the C position is filled.

One major advantage of this analysis is that it derives the different sentence types that are possible with this kind of construction: the V1-variants correspond to polar questions and imperatives, and the V2-variants with a question word correspond to *wh*-questions.

Figure 21.21: *In den Müll mit diesen Klamotten* 'in the trash with these clothes' as an antipassive following G. Müller (2009a)

A further consequence of the approach pointed out by G. Müller is that no further explanation is required for other interactions with the grammar. For example, the way in which the constructions interact with adverbs follows from the analysis.



Nevertheless, one should still bear in mind the price of this analysis: it assumes an empty antipassive morpheme that is otherwise not needed in German. It would only be used in constructions of the kind discussed here. This morpheme is not compatible with any verb and it also triggers obligatory verb movement, which is something that is not known from any other morpheme used to form verb diatheses.

The costs of this analysis are, of course, less severe if one assumes that humans already have this antipassive morpheme anyway, that is, this morpheme is part of our innate Universal Grammar. But if one follows the argumentation from the earlier sections of this chapter, then one should only assume innate linguistic knowledge if there is no alternative explanation.

G. Müller's analysis can be translated into HPSG. The result is given in (124):

(124)

$$
\begin{bmatrix}
\textit{verb-initial-lr}\\[2pt]
\text{RELS}~\left\langle\begin{bmatrix}\textit{imperative-or-interrogative}\\ \text{EVENT}~\boxed{2}\end{bmatrix}\right\rangle \oplus \ldots\\[2pt]
\text{LEX-DTR}~\begin{bmatrix}
\text{PHON}~\langle\,\rangle\\
\text{SS|LOC}~\begin{bmatrix}
\text{CAT}~\begin{bmatrix}
\text{HEAD|MOD}~\textit{none}\\
\text{SPR}~\langle\,\rangle\\
\text{COMPS}~\left\langle\,\text{XP[MOD}~\ldots~\text{IND}~\boxed{1}\text{]},\ (\text{PP[\textit{mit}]}~\boxed{1})\,\right\rangle
\end{bmatrix}\\
\text{CONT}~\begin{bmatrix}
\text{IND}~\boxed{2}\\
\text{RELS}~\left\langle\begin{bmatrix}\textit{directive}\\ \text{EVENT}~\boxed{2}\\ \text{PATIENT}~\boxed{1}\end{bmatrix}\right\rangle
\end{bmatrix}
\end{bmatrix}
\end{bmatrix}
\end{bmatrix}
$$

 (124) contains a lexical entry for an empty verb in verb-initial position. *directive*′ is a placeholder for a more general relation that should be viewed as a supertype of all possible meanings of this construction. These subsume both *schmeißen* 'to throw' and cases such as (125) that were pointed out to me by Monika Budde:

(125) Und mit dem Klavier ganz langsam durch die Tür!
      and with the piano very slowly through the door
      'Carry the piano very slowly through the door!'

Since only verb-initial and verb-second orders are possible in this construction, the application of the lexical rule for verb-initial position (see page 298) is obligatory. This can be achieved by writing the result of the application of this lexical rule into the lexicon, without the object to which the rule applies actually being present in the lexicon itself. Koenig (1999: Section 3.4.2, 5.3) proposed something similar for English *rumored* 'it is rumored that …' and *aggressive*. There is no active variant of the verb *rumored*, a fact that can be captured by the assumption that only the result of applying a passive lexical rule is present in the lexicon. The actual verb or verb stem from which the participle form has been derived exists only as the daughter of a lexical rule but not as an independent linguistic object. Similarly, the verb \**aggress* only exists as the daughter of a (non-productive) adjective rule that licenses *aggressive* and of a nominalization rule licensing *aggression*.
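The idea that only the output of a lexical rule is listed can be sketched in a few lines. The following Python fragment is a toy illustration under simplified assumptions; the attribute names and the rule function are invented for exposition and do not correspond to any particular implementation.

```python
def v1_lexical_rule(lex_dtr):
    """Map a lexical item to its verb-initial variant; the daughter is
    kept inside the output, mirroring the LEX-DTR feature in (124)."""
    return {**lex_dtr, "initial": True, "lex-dtr": lex_dtr}

# The daughter of the rule: an empty directive verb. It is written
# directly into the rule application and is never listed on its own.
empty_directive_verb = v1_lexical_rule({
    "phon": [],
    "comps": ["XP[mod]", "(PP[mit])"],
    "rels": ["directive"],
})

# Only the output of the rule is stored in the lexicon:
LEXICON = [empty_directive_verb]
```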

The optionality of the *mit*-PP is signaled by the brackets in (124). If one adds the information inherited from the type *verb-initial-lr* under synsem, then the result is (126).

 The valence properties of the empty verb in (126) are to a large extent determined by the lexical rule for verb-initial order: the V1-LR licenses a verbal head that requires a VP to its right that is missing a verb with the local properties of the lex-dtr ( 3 ).

Semantic information dependent on sentence type (assertion, imperative or question) is determined inside the V1-LR depending on the morphological make-up of the verb and the slash value of the selected VP (see Müller 2007a: Section 10.3; 2015b; 2023a). Setting the semantics to *imperative-or-interrogative* rules out *assertion* as it occurs in V2 clauses. Whether this type is resolved in the direction of *imperative* or *interrogative* is ultimately decided by further properties of the utterance such as intonation or the use of interrogative pronouns.

The valence of the lexical daughter in (126) as well as the connection to the semantic role (the linking to the patient role) are simply stipulated. Every approach has to stipulate that an argument of the verb has to be expressed as a *mit*-PP. Since there is no antipassive in German, the effect that would otherwise be achieved by an antipassive lexical rule is simply written into the lex-dtr of the verb movement rule in (126).

The comps list of lex-dtr contains a modifier (adverb, directional PP) and the *mit*-PP. This *mit*-PP is co-indexed with the patient of *directive*′ and the modifier refers to the referent of the *mit*-PP. The agent of *directive*′ is unspecified since it depends on the context (speaker, hearer, third person).

This analysis is shown in Figure 21.22 on the next page. Here, V[loc 2 ] corresponds to the lex-dtr in (126). The V1-LR licenses an element that requires a maximal verb projection with exactly this dsl value 2 .

Figure 21.22: HPSG variant of the analysis of *In den Müll mit diesen Klamotten!/?*

Since dsl is a head feature, the information is present along the head path. The dsl value is identified with the local value ( 2 in Figure 21.22) in the verb movement trace (see page 299). This ensures that the empty element at the end of the sentence has exactly the same local properties as the lex-dtr in (126). Thus, both the correct syntactic and semantic information is present on the verb trace, and structure building involving the verb trace follows the usual principles. The structures correspond to the structures that were assumed for German sentences in Chapter 9. Therefore, there are the usual possibilities for integrating adjuncts. The correct derivation of the semantics, in particular embedding under imperative or interrogative semantics, follows automatically (for the semantics of adjuncts in conjunction with verb position, see Müller (2007a: Section 9.4)). Also, the ordering variants with the *mit*-PP preceding the direction (125) and the direction preceding the *mit*-PP (120b) follow from the usual mechanisms.

If one rejects the analyses discussed up to this point, then one is only really left with phrasal constructions or dominance schemata that connect parts of the construction and contribute the relevant semantics. Exactly how one can integrate adjuncts into the phrasal construction in a non-stipulative way remains an open question; however, there are already some initial results by Jakob Maché (2010) suggesting that directives can still be sensibly integrated into the entire grammar provided an appropriate phrasal schema is assumed.

### **21.10.2 Serial verbs**

There are languages with so-called serial verbs. For example, it is possible to form sentences in Mandarin Chinese where there is only one subject and several verb phrases (Li & Thompson 1981: Chapter 21). There are multiple readings depending on the distribution of aspect marking inside the VP:<sup>61</sup> if the first VP contains a perfect marker, then we have the meaning 'VP1 in order to do/achieve VP2' (127a). If the second VP contains a perfect marker, then the entire construction means 'VP2 because VP1' (127b), and if the first VP contains a durative marker and the verb *hold* or *use*, then the entire construction means 'VP2 using VP1' (127c).

(127) b. Ta1 chi1 pu2tao tu3 le pu2taopi2.
         he eat grape spit prf grape.skin
      'He spat grape skins because he ate grapes.'
      c. Ta1 na2 zhe kuai4zi chi1 fan4.
         he hold dur chopsticks eat food
      'He eats with chopsticks.'

If we consider the sentences, we only see two adjacent VPs. The meanings of the entire sentences, however, contain parts of meaning that go beyond the meaning of the verb phrases. Depending on the kind of aspect marking, we arrive at different interpretations with regard to the semantic combination of verb phrases. As can be seen in the translations, English sometimes uses conjunctions in order to express relations between clauses or verb phrases.

There are three possible ways to capture these data:

- One could assume that the sentences are ambiguous and that the additional meaning components are contributed by the context.
- One could assume empty heads that contribute the relevant semantics.
- One could assume phrasal constructions that combine the two verb phrases and contribute the additional meaning.


The first approach is unsatisfactory because the meaning does not vary arbitrarily. There are grammaticalized conventions that should be captured by a theory. The second solution has a stipulative character and thus, if one wishes to avoid empty elements, only the third solution remains. Müller & Lipenkova (2009) have presented a corresponding analysis.

### **21.10.3 Relative and interrogative clauses**

Sag (1997) develops a phrasal analysis of English relative clauses, as do Ginzburg & Sag (2000) for interrogative clauses. Relative and interrogative clauses consist of a fronted phrase and a clause or a verb phrase missing the fronted phrase.

<sup>61</sup>See Müller & Lipenkova (2009) for a detailed discussion and further references.

The fronted phrase contains a relative or interrogative pronoun.

(128) b. the man [who] we know
      c. the man [whose mother] visited Kim
      d. a house [in which] to live

(129) b. I want to know [why] you did this.

The GB analysis of relative clauses is given in Figure 21.23. In this analysis, an empty head is in the C position and an element from the IP is moved to the specifier position.

Figure 21.23: Analysis of relative clauses in GB theory

The alternative analysis shown in Figure 21.24 involves combining the subparts directly in order to form a relative clause. Borsley (2006) has shown that one would require six empty heads in order to capture the various types of relative clauses possible in English if one wanted to analyze them lexically.

whose remarks they seemed to want to object to

Figure 21.24: Analysis of relative clauses in HPSG following Sag (1997)

These heads can be avoided and replaced by corresponding schemata (see Chapter 19 on empty elements). A parallel argument can also be found in Webelhuth (2011) for German: grammars of German would also have to assume six empty heads for the relevant types of relative clause.


Unlike the resultative constructions that were already discussed, there is no variability among interrogative and relative clauses with regard to the order of their parts. There are no changes in valence and no interaction with derivational morphology. Thus, nothing speaks against a phrasal analysis. If one wishes to avoid the assumption of empty heads, then one should opt for the analysis of relative clauses by Sag, or the variant in Müller (1999b: Chapter 10; 2007a: Chapter 11). The latter analysis does without a special schema for noun-relative clause combinations since the semantic content of the relative clause is provided by the relative clause schema.

Sag (2010) discusses long-distance dependencies in English that are subsumed under the term *wh*-movement in GB theory and the Minimalist Program. He shows that this is by no means a uniform phenomenon. He investigates *wh*-questions (130), *wh*-exclamatives (131), topicalization (132), *wh*-relative clauses (133) and *the*-clauses (134):

(130) a. How foolish is he?

b. I wonder *how foolish he is*.

(131) b. It's amazing *how odd it is*.

(133) b. I'm looking for a bank *in which to place my trust*.

(134) b. *The more people I met*, the happier I became.

These individual constructions vary in many respects. Sag lists a number of questions that have to be answered for each construction.


The variation that exists in this domain has to be captured somehow by a theory of grammar. Sag develops an analysis with multiple schemata that ensure that the category and semantic contribution of the mother node correspond to the properties of both daughters. The constraints for both classes of constructions and specific constructions are represented in an inheritance hierarchy so that the similarities between the constructions can be accounted for. The analysis can of course also be formulated in a GB-style using empty heads. One would then have to find some way of capturing the generalizations pertaining to the constructions. This is possible if one represents the constraints on empty heads in an inheritance hierarchy. Then, the approaches would simply be notational variants of one another. If one wishes to avoid empty elements in the grammar, then the phrasal approach would be preferable.

### **21.10.4 The N-P-N construction**

Jackendoff (2008) discusses the English N-P-N construction. Examples of this construction are given in (135):

(135) a. day by day, paragraph by paragraph, country by country
      b. dollar for dollar, student for student, point for point
      c. face to face, bumper to bumper
      d. term paper after term paper, picture after picture
      e. book upon book, argument upon argument

This construction is relatively restricted: articles and plural nouns are not allowed. The phonological content of the first noun has to correspond to that of the second. There are also similar constructions in German:

(136) a. Buch um Buch
         book around book
      'book after book'
      b. Zeile für Zeile<sup>62</sup>
         line for line
      'line by line'

Determining the meaning contribution of this kind of N-P-N construction is by no means trivial. Jackendoff suggests the meaning *many Xs in succession* as an approximation.

Jackendoff points out that this construction is problematic from a syntactic perspective since it is not possible to determine a head in a straightforward way. It is also not clear what the structure of the remaining material is if one is working under the assumptions of X̄ theory. If the preposition *um* were the head in (136a), then one would expect it to be combined with an NP; however, this is not possible:

(137) b. \* Er hat ein Buch um ein Buch verschlungen.
           he has a book around a book swallowed

<sup>62</sup>*Zwölf Städte*. Einstürzende Neubauten. Fünf auf der nach oben offenen Richterskala, 1987.

For this kind of structure, it would be necessary to assume that a preposition selects a noun to its right and, once it finds one, requires a second noun of exactly this form to its left. For N-*um*-N and N-*für*-N, it is not entirely clear what the entire construction has to do with the individual prepositions. One could also try to develop a lexical analysis for this phenomenon, but the facts are different from those for resultative constructions: in resultative constructions, the semantics of simplex verbs clearly plays a role. Furthermore, unlike with the resultative construction, the order of the component parts of the construction is fixed in the N-P-N construction. It is not possible to extract a noun or place the preposition in front of both nouns. Syntactically, the N-P-N combination with some prepositions behaves like an NP (Jackendoff 2008: 9):

(138) Student after/upon/\*by student flunked.

This is also strange if one wishes to view the preposition as the head of the construction. Instead of a lexical analysis, Jackendoff proposes the following phrasal construction for N-*after*-N combinations:

(139) Meaning: MANY Xs IN SUCCESSION [or however it is encoded]
      Syntax: [NP N P N]
      Phonology: Wd *after* Wd

The entire meaning as well as the fact that the N-P-N has the syntactic properties of an NP would be captured on the construction level.

I already discussed examples by Bargmann (2015) in Section 11.7.2.4 that show that N-P-N constructions may be extended by further P-N combinations:

(140) Day after day after day went by, but I never found the courage to talk to her.

So rather than an N-P-N pattern Bargmann suggests the pattern in (141), where '+' stands for at least one repetition of a sequence.

(141) N (P N)+

As was pointed out on page 416 this pattern is not easy to cover in selection-based approaches. One could assume that an N takes arbitrarily many P-N combinations, which would be very unusual for heads. Alternatively, one could assume recursion, so N would be combined with a P and with an N-P-N to yield N-P-N-P-N. But such an analysis would make it really difficult to enforce the restrictions regarding the identity of the nouns in the complete construction. In order to enforce such an identity the N that is combined with N-P-N would have to impose constraints regarding deeply embedded nouns inside the embedded N-P-N object (see also Section 18.2).
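The identity constraint is easy to state over the flat pattern but hard to thread through a recursive selection-based analysis. The following Python function is a minimal recognizer for the flat N (P N)+ pattern, included only as an illustration; the token-based representation and the preposition set are simplifying assumptions.

```python
def is_npn(tokens, prepositions=frozenset({"after", "upon", "for", "to", "um", "für"})):
    """Recognize Bargmann's N (P N)+ pattern: an odd-length sequence of
    the form N P N P N ... in which all nouns are identical."""
    if len(tokens) < 3 or len(tokens) % 2 == 0:
        return False
    noun = tokens[0]
    return all(
        tokens[i] in prepositions and tokens[i + 1] == noun
        for i in range(1, len(tokens), 2)
    )

print(is_npn("day after day after day".split()))  # True
print(is_npn("day after night".split()))          # False: the nouns differ
```

Stating the same identity requirement in a recursive N + (P N-P-N) analysis would require the selecting noun to constrain a noun arbitrarily deep inside its sister, which is exactly the problem noted above.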

G. Müller (2011) proposes a lexical analysis of the N-P-N construction. He assumes that prepositions can have a feature redup. In the analysis of *Buch um Buch* 'book after book', the preposition is first combined with the noun to its right, yielding *um Buch*. In the phonological component, reduplication of *Buch* is triggered by the redup feature, thereby yielding *Buch um Buch*. This analysis also suffers from the problems pointed out by Jackendoff: in order to derive the semantics of the construction, the semantics would have to be present in the lexical entry of the reduplicating preposition (or in a relevant subsequent component that interprets the syntax). Furthermore, it is unclear how a reduplication analysis would deal with the Bargmann data.

# **Further reading**

This chapter is a reflection of one of the most controversial discussions within Construction Grammar (Croft 2003) and among proponents of Construction Grammar and other frameworks. Goldberg's (1995) phrasal approach to argument structure constructions is very influential, and many researchers from other frameworks have tried to incorporate her insights into their approaches (for example Asudeh et al. 2008, 2014, Lichte & Kallmeyer 2017). The chapter is an extended and updated version of Müller & Wechsler (2014a,b). Müller & Wechsler (2014a) is a target article; the replies in this volume may be interesting for the reader and may help to get an idea of the complexity of the discussion. This chapter also contains parts of the discussion of constructional approaches suggested in LFG (Asudeh, Dalrymple & Toivonen 2008, 2014) that were taken from a book on the phrasal approach to benefactives in English (Müller 2018a).

# **22 Structure, potential structure and underspecification**

The previous chapter extensively dealt with the question whether one should adopt a phrasal or a lexical analysis of valence alternations. This rather brief chapter deals with a related issue. I discuss the analysis of complex predicates consisting of a preverb and a light verb. Preverbs often have an argument structure of their own. They describe an event and the light verb can be used to realize either the full number of arguments or a reduced set of arguments. (1) provides the example from Hindi discussed by Vaidya, Rambow & Palmer (2019).

(1) a. logon=ne pustak=kii tareef k-ii
       people=erg book.f.sg=gen praise.f do-perf.f.sg
    'People praised the book.'
    b. pustak=kii tareef hu-ii
       book.f.sg=gen praise.f be.part-perf.f.sg/be.pres
    'The book got praised.' Lit: 'The praise of the book happened.'

*tareef* 'praise' is a noun that can be combined with the light verb *kar* 'do' to form an active sentence as in (1a) or with the light verb *ho* 'be' to form a passive sentence as in (1b). Similar examples can of course be found in other languages making heavy use of complex predicates (Müller 2010b).

In what follows, I compare the analysis of Vaidya et al. (2019) in the framework of Lexicalized TAG with an HPSG analysis. As the name says, LTAG is a lexicalized framework, something that was argued for in the previous chapter. However, TAG is similar to phrasal Construction Grammar in that it makes use of phrasal configurations to represent argument slots. This differs from Categorial Grammar and HPSG since the latter frameworks assume descriptions of arguments (head/functor representations in CG and valence lists in HPSG) rather than structures containing these arguments. So while TAG elementary trees contain actual structure, CG and HPSG lexical items contain potential structure. TAG structures can be taken apart and items can be inserted into the middle of an existing structure, but usually the structure is not transformed into something else.<sup>1</sup> This is an interesting difference that becomes crucial when talking about the formation of complex predicates and in particular about certain active/passive alternations.

<sup>1</sup>One way to "delete" parts of the structure would be to assume empty elements that can be inserted into substitution nodes (see Chapter 19 for discussion).


Vaidya et al. (2019) assume that the structures for the examples in (1) are composed of elementary trees for *tareef* 'praise' and the respective light verbs. This is shown in Figure 22.1 and Figure 22.2, respectively. The TAG analysis is only sketched here.

Figure 22.1: Analysis of *logon=ne pustak=kii tareef k-ii* 'People praised the book.' The tree of the light verb is adjoined into the tree of the preverb, into the XP<sup>1</sup> position

Figure 22.2: Analysis of *pustak=kii tareef hu-ii* 'The book got praised.'

The authors use feature-based TAG, which makes it possible to enforce obligatory adjunction: the elementary tree for *tareef* is specified in a way that makes it necessary to take the tree apart and insert nodes of another tree (see page 432). This way it can be ensured that the preverb has to be augmented by a light verb. This results in XP being inserted at XP<sup>1</sup> in the figures above.

What the analysis clearly shows is that TAG assumes two lexical items for the preverb: one with two arguments for the active case and one with just one argument for the passive. In general one would say that *tareef* is a noun describing a praising event, that is, one person praises another one. This noun can be combined with a light verb, and depending on which light verb is used we get an active sentence with both arguments realized or a passive sentence with the agent of the eventive noun suppressed. There is no morphological reflex of this active/passive alternation on the noun: it is the same noun *tareef* in the active sentence in (1a) and in the passive one in (1b).

And here we see a real difference between the frameworks. TAG is a framework in which structure is assembled: the basic operations are substitution and adjunction. The lexicon consists of ready-made building blocks that are combined to yield the trees we want to have in the end. This differs from Categorial Grammar and HPSG, where lexical items do not encode real structure to be used in an analysis, but potential structure: lexical items come with a list of their arguments, that is, items that are required for the lexical element under consideration to project to a full phrase. Lexical heads may enter relations with their valents and form NPs, APs, VPs, PPs or other phrases, but they do not have to. Geach (1970) developed a technique that is called functional composition or argument composition within the framework of Categorial Grammar, and it was transferred to HPSG by Hinrichs & Nakazawa (1989b, 1994a). Since the 1990s, this technique has been used for the analysis of complex predicates in HPSG for German (Hinrichs & Nakazawa 1989b, 1994a, Kiss 1995, Meurers 1999a, Müller 1999b, Kathol 2000), Romance (Miller & Sag 1997: 600; Monachesi 1998; Abeillé & Godard 2002), Korean (Chung 1998), and Persian (Müller 2010b). See Godard & Samvelian (2021) for an overview. For instance, Müller (2010b: 642) analyzes the light verbs *kardan* 'do' and *šodan* 'become' this way: both raise the subject of the embedded predicate and make it their own argument, but *kardan* introduces an additional argument while *šodan* does not.

Applying the argument composition technique to our example, we get the following lexical item for *tareef*:

(2) Sketch of lexical item for *tareef* 'praise':

$$\begin{bmatrix}
\text{head} & \textit{noun}\\
\text{subj} & \left\langle\, \boxed{1} \,\right\rangle\\
\text{comps} & \left\langle\, \boxed{2}\ \text{NP} \,\right\rangle\\
\text{arg-st} & \left\langle\, \boxed{1}\ \text{NP},\ \boxed{2}\ \text{NP} \,\right\rangle
\end{bmatrix}$$

The arg-st list contains all arguments of a head. The arguments are linked to the semantic representation and are mapped to valence features like specifier and complements. Depending on the language and on whether subjects can be realized within the projections of the respective head, the subject may be mapped to a separate feature, which is a head feature. head features are projected along the head path, but the features contained under head do not license combinations with the head.

The lexical items for *kar* 'do' and *ho* 'be' are:

(3) a. Sketch of lexical item for *kar* 'do':

$$\begin{bmatrix}
\text{head} & \textit{verb}\\
\text{arg-st} & \boxed{1} \oplus \boxed{2} \oplus \left\langle\, \text{N}[\text{subj}\ \boxed{1},\ \text{comps}\ \boxed{2}] \,\right\rangle
\end{bmatrix}$$

Figure 22.3: Analysis of *logon=ne pustak=kii tareef k-ii* 'People praised the book.' The arguments of the preverb are taken over by the light verb

b. Sketch of lexical item for *ho* 'be':

$$\begin{bmatrix}
\text{head} & \textit{verb}\\
\text{arg-st} & \boxed{1} \oplus \left\langle\, \text{N}[\text{comps}\ \boxed{1}] \,\right\rangle
\end{bmatrix}$$

The verb *kar* 'do' selects for a noun, takes over whatever the subject of this noun is ( 1 ) and concatenates it with the list of complements the noun takes ( 2 ). The result 1 ⊕ 2 is a prefix of the arg-st list of the light verb. The lexical item for *ho* 'be' is similar, the difference being that the subject of the embedded noun is not attracted to the higher arg-st list; only the complements ( 1 ) are.

For finite verbs it is assumed that all arguments are mapped to the comps list of the verb, so the comps list is identical to the arg-st list. The analysis of our example sentences is shown in Figures 22.3 and 22.4.
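To make the effect of the ⊕ concatenations in (3) concrete, the following Python snippet mimics how the light verbs' arg-st lists are computed from the preverb's subj and comps values. It is a minimal sketch, not part of any actual HPSG implementation: the dictionaries, the function names and the NP labels are invented for illustration, and plain list concatenation stands in for HPSG's append relation.

```python
# A lexical item is modeled as a dict; valence lists are Python lists.
# Plain list concatenation (+) stands in for the append relation (⊕).

tareef = {                     # preverb: a noun with a subject and a complement
    "head": "noun",
    "subj": ["NP-agent"],      # tag 1 in (2)
    "comps": ["NP-patient"],   # tag 2 in (2)
}

def kar(noun):
    """Light verb 'do': attracts subj (1) and comps (2) of the noun.
    arg-st = 1 ⊕ 2 ⊕ ⟨ N ⟩, cf. (3a)."""
    return {"head": "verb",
            "arg_st": noun["subj"] + noun["comps"] + ["N"]}

def ho(noun):
    """Light verb 'be': attracts only the comps (1) of the noun; the
    noun's subject is not taken over. arg-st = 1 ⊕ ⟨ N ⟩, cf. (3b)."""
    return {"head": "verb",
            "arg_st": noun["comps"] + ["N"]}

print(kar(tareef)["arg_st"])   # ['NP-agent', 'NP-patient', 'N'] – active
print(ho(tareef)["arg_st"])    # ['NP-patient', 'N'] – passive
```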

The conclusion is that HPSG has a representation of potential structure. When light verbs are present, they can take over valents and "execute" them according to their own preferences. This is not possible in TAG: once structure is assembled, it cannot be changed. We may insert items into the middle of an already assembled structure, but we cannot take out arguments or reorder them. This is possible in Categorial Grammar and in HPSG: the governing head may choose which arguments to take over and in which order to represent them in its own valence representations.

LFG is somewhere in the middle between TAG and HPSG: the phrase structural configurations are not fully determined as in TAG, since LFG does not store and manipulate phrase markers. But lexical items are associated with f-structures, and these f-structures determine which elements are realized in syntax. Since complex predicates are assumed to be monoclausal, it is not sufficient to embed the f-structure of the preverb

Figure 22.4: Analysis of *pustak=kii tareef hu-ii* 'The book got praised.'

within the f-structure of the light verb (Butt et al. 2003). Since the grammatical functions that are ultimately realized in the clause do not depend on the preverb alone, the light verb may have to determine the grammatical functions contributed by the preverb. In order to be able to do this, Butt et al. (2003) use the restriction operator (Kaplan & Wedekind 1993), which restricts out certain features or path equations provided by the preverb's and the light verb's f-structures. The statement of grammatical functions in f-structures is another instance of overly strict specification: once specified, grammatical functions are difficult to get rid of, and special means like partial copying via restriction are needed. An alternative not relying on restriction was suggested by Butt (1997): embedding relations can be specified on the a-structure representation, and a mapping is then defined that maps the complex a-structure to the desired f-structure. Mapping between several levels of representation is a general tool that is also used in HPSG: for instance, Bouma, Malouf & Sag (2001) used arg-st, deps, and comps in the treatment of nonlocal dependencies. See also Koenig (1999) on the introduction of arguments via additional auxiliary features. As I showed in Müller (2007a: Section 7.5.2.2), one would need an extra feature for every kind of argument alternation that is to be modeled this way. Recent versions of LFG use glue semantics to keep track of arguments (Dalrymple, Lamping & Saraswat 1993; Dalrymple 2001: Chapter 8). Glue semantics can be used to do argument extension and argument manipulation in general in ways that are parallel to argument attraction approaches. See for instance Asudeh, Dalrymple & Toivonen (2013) for a treatment of benefactive arguments.
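The effect of restricting out grammatical functions can be illustrated with a small sketch. This is merely a toy model of the idea, under the assumption that an f-structure can be approximated by an attribute–value dictionary; the function name and the attribute labels are invented for illustration and do not reflect any actual LFG implementation, in which restriction operates on equations rather than on finished structures.

```python
# Toy model of the restriction operator: an f-structure minus some of
# its attributes.

def restrict(fstructure: dict, *attrs: str) -> dict:
    """Return a copy of the f-structure without the given attributes."""
    return {k: v for k, v in fstructure.items() if k not in attrs}

preverb_f = {"pred": "praise<subj,obj>", "subj": "...", "obj": "..."}

# A light verb that suppresses the preverb's subject could combine with
# the preverb's f-structure minus its subj attribute:
print(restrict(preverb_f, "subj"))
# {'pred': 'praise<subj,obj>', 'obj': '...'}
```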

Summing up, I showed that there are indeed differences between the frameworks that are due to the basic representational formalisms they assume. While TAG assumes that the lexicon contains trees with a certain structure, HPSG assumes that lexical items come with valence specifications, that is, they have descriptions of items that have to be combined with the lexical item. But the way in which the items have to be combined with the head is determined by dominance schemata (grammar rules) that are separate from the lexical items. So the valence specifications specify possible structures. Since


valence representations can be composed by superordinate predicates, there is enough flexibility to deal with various light verb phenomena. LFG is a bit more constrained due to the use of f-structures, but with the restriction operator, unwanted information about grammatical functions can be kept out of the f-structures of matrix predicates.

This chapter is based on Müller (2019a).

# **23 Universal Grammar and doing comparative linguistics without an a priori assumption of a (strong) UG**

The following two sections deal with the tools that I believe to be necessary to capture generalizations and the way one can derive such generalizations.

# **23.1 Formal tools for capturing generalizations**

In Chapter 13, it was shown that all the evidence that has previously been brought forward in favor of innate linguistic knowledge is in fact controversial. In some cases, the facts are irrelevant to the discussion and in other cases, they could be explained in other ways. Sometimes, the chains of argumentation are not logically sound or the premises are not supported. In other cases, the argumentation is circular. As a result, the question of whether there is innate linguistic knowledge still remains unanswered. All theories that presuppose the existence of this kind of knowledge are making very strong assumptions. If one assumes, as Kayne (1994) does, for example, that all languages have the underlying structure [specifier [head complement]] and that movement is exclusively to the left, then these two basic assumptions must be part of innate linguistic knowledge, since there is no evidence for the assumption that utterances in all natural languages have the structure that Kayne suggests. As an example, the reader may check Laenzlinger's proposal for German (2004: 224), which is depicted in Figure 4.20 on page 149. According to Laenzlinger, (1a) is derived from the underlying structure in (1b):

(1) b. \* weil der Mann wahrscheinlich nicht oft gut hat gespielt diese Sonate
       because the man probably not often well has played this sonata

(1b) is entirely unacceptable, so the respective structure cannot be acquired from input and hence the principles and rules that license it would have to be innate.

As we have seen, there are a number of alternative theories that are much more surface-oriented than most variants of Transformational Grammar. These alternative theories often differ with regard to particular assumptions that have been discussed in the preceding sections. For example, there are differences in the treatment of long-distance

dependencies that have led to a proliferation of lexical items in Categorial Grammar (see Section 8.6). As has been shown by Jacobs (2008), Jackendoff (2008) and others, approaches such as Categorial Grammar that assume that every phrase must have a functor/head cannot explain certain constructions in a plausible way. Inheritance-based phrasal analyses that list only heads with a core meaning in the lexicon and let the constructions in which the heads occur determine the meaning of a complex expression turn out to have difficulties with derivational morphology and with accounting for alternative ways of argument realization (see Sections 21.2.2, 21.4.1, and 21.4.2). We therefore need a theory that handles argument structure-changing processes in the lexicon and still has some kind of phrase structure or relevant schemata. Some variants of GB/MP as well as LFG, HPSG, TAG and variants of CxG are examples of this kind of theory. Of these theories, only HPSG and some variants of CxG make use of the same descriptive tools ((typed) feature descriptions) for roots, stems, words, lexical rules and phrases. By using a uniform description for all these objects, it is possible to formulate generalizations over the relevant objects. It is therefore possible to capture what particular words have in common with lexical rules or phrases. For example, the -*bar* 'able' derivation corresponds to a complex passive construction with a modal verb, as (2) illustrates.

(2) a. Das Rätsel ist lösbar.
       the puzzle is solvable
       'The puzzle is solvable.'
    b. Das Rätsel kann gelöst werden.
       the puzzle can solved be
       'The puzzle can be solved.'

By using the same descriptive inventory for syntax and morphology, it is possible to capture cross-linguistic generalizations: something that is inflection/derivation in one language can be syntax in another.

It is possible to formulate principles that hold for both words and phrases and furthermore, it is possible to capture cross-linguistic generalizations or generalizations that hold for certain groups of languages. For example, languages can be divided into those with fixed constituent order and those with more flexible or completely free constituent order. The corresponding types can be represented with their constraints in a type hierarchy. Different languages can use a particular part of the hierarchy and also formulate different constraints for each of the types (see Ackerman & Webelhuth 1998: Section 9.2). HPSG differs from theories such as LFG and TAG in that phrases are not ontologically different from words. This means that there are no special c-structures or tree structures. Descriptions of complex phrases simply have additional features that say something about their daughters. In this way, it is possible to formulate cross-linguistic generalizations about dominance schemata. In LFG, the c-structure rules are normally specified separately for each language. Another advantage of consistent description is that one can capture similarities between words and lexical rules, as well as between words and phrases. For example, a complementizer such as *dass* 'that' shares a number of properties with a simple verb or with coordinated verbs in initial position:

(3) b. [Kennt und liebt] Maria die Platte?
       knows and loves Maria the record
       'Does Maria know and love the record?'

The difference between the two linguistic objects mainly lies in the kind of phrase they select: the complementizer requires a sentence with a visible finite verb, whereas the verb in initial position requires a sentence without a visible finite verb.

In Section 9.1.5, a small part of an inheritance hierarchy was presented. This part contains types that probably play a role in the grammars of all natural languages: there are head-argument combinations in every language. Without this kind of combinatorial operation, it would not be possible to establish a relation between two concepts. The ability to create relations, however, is one of the basic properties of language.

In addition to more general types, the type hierarchy of a particular language contains language-specific types or types specific to a particular class of languages. All languages presumably have one- and two-place predicates, and for most languages (if not all), it makes sense to talk about verbs. It is then possible to talk about one- and two-place verbs. Depending on the language, these can then be subdivided into intransitive and transitive. Constraints are formulated for the various types that can either hold generally or be language-specific. In English, verbs have to occur before their complements and therefore have the initial value +, whereas verbs in German have the initial value −, and it is the lexical rule for initial position that licenses a verb with an initial value +.
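The following lines sketch this use of the initial value. This is a deliberately simplified illustration, assuming that lexical items can be modeled as dictionaries; the names are invented, and nothing here corresponds to a concrete HPSG grammar.

```python
# Sketch of the initial feature: English verbs are lexically [initial +],
# German verbs are [initial −], and a lexical rule licenses the
# verb-initial variant of a German verb.

knows = {"phon": "knows", "head": "verb", "initial": "+"}
kennt = {"phon": "kennt", "head": "verb", "initial": "-"}

def initial_position_lexical_rule(verb: dict) -> dict:
    """License a verb-initial counterpart of a verb-final verb."""
    fronted = dict(verb)          # copy; lexical rules are non-destructive
    fronted["initial"] = "+"
    return fronted

print(initial_position_lexical_rule(kennt))
# {'phon': 'kennt', 'head': 'verb', 'initial': '+'}
```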

The differing settings of the initial value for German and English are reminiscent of parameters from GB theory. There is one crucial difference, however: it is not assumed that a language learner sets the initial value for all heads once and for all. The use of an initial value is compatible with models of acquisition that assume that learners learn individual words together with their positional properties. It is certainly possible for the respective words to exhibit different values for a particular feature. Generalizations about the position of entire word classes are only learned at a later point in the acquisition process.

A hierarchy analogous to the one proposed by Croft (see Section 21.4.1) is given in Figure 23.1 on the next page. For inflected words, the relevant roots are in the lexicon. Examples of this are *schlaf* - 'sleep', *lieb*- 'love' and *geb*- 'give'. In Figure 23.1, there are different subtypes of *root*, the general type for roots: e.g., *intrans-verb* for intransitive verbs and *trans-verb* for transitive verbs. Transitive verbs can be further subdivided into strictly transitive verbs (those with nominative and accusative arguments) and ditransitive verbs (those with nominative and both accusative and dative arguments). The hierarchy above would of course have to be refined considerably as there are even further sub-classes for both transitive and intransitive verbs. For example, one can divide intransitive verbs into unaccusative and unergative verbs and even strictly transitive verbs would have to be divided into further sub-classes (see Welke 2009: Section 2).


Figure 23.1: Section of an inheritance hierarchy with lexical entries and dominance schemata

In addition to a type for roots, the above figure contains types for stems and words. Complex stems are complex objects that are derived from simple roots but still have to be inflected (*lesbar*- 'readable', *besing*- 'to sing about'). Words are objects that do not inflect. Examples of these are the pronouns *er* 'he', *sie* 'she' etc. as well as prepositions. An inflected form can be formed from a verbal stem (*geliebt* 'loved', *besingt* 'sings about'). Relations between inflected words and (complex) stems can again be formed using derivation rules. In this way, *geliebt* 'loved' can be recategorized as an adjective stem that must then be combined with adjectival endings (*geliebt-e*). The relevant descriptions of complex stems/words are subtypes of *complex-stem* or *word*. These subtypes describe the form that complex words such as *geliebte* must have. For a technical implementation of this, see Müller (2002a: Section 3.2.7). Using dominance schemata, all words can be combined to form phrases. The hierarchy given here is of course by no means complete. There are a number of additional valence classes, and one could also assume more general types that simply describe one-, two- and three-place predicates. Such types are probably plausible for the description of other languages. Here, we are only dealing with a small part of the type hierarchy in order to allow a comparison with the Croftian hierarchy: in Figure 23.1, there are no types for sentence patterns with the form [Sbj IntrVerb], but rather types for lexical objects with a particular valence (V[comps ⟨ NP[*str*] ⟩]). Lexical rules can then be applied to the relevant lexical objects that license objects with another valence or introduce information about inflection. Complete words can be combined in the syntax with relatively general rules, for example in head-argument structures. The problems from which purely phrasal approaches suffer are thereby avoided. Nevertheless, generalizations about lexeme classes and the utterances that can be formed can be captured in the hierarchy.
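The inheritance logic of such a hierarchy can be sketched with ordinary class inheritance. The following Python fragment is only an analogy, with invented names and simplified valence values mirroring a few of the types in Figure 23.1; real HPSG type hierarchies are declarative constraint systems, not classes, and the passive rule below is a drastic simplification.

```python
# A slice of the hierarchy in Figure 23.1 as Python classes: constraints
# attached to a type are inherited by all of its subtypes.

class Root:                          # the general type for roots
    pass

class VerbRoot(Root):
    comps: list = []                 # valence, simplified

class IntransVerb(VerbRoot):         # V[comps ⟨ NP[str] ⟩]
    comps = ["NP[str]"]

class TransVerb(VerbRoot):           # V[comps ⟨ NP[str], NP[str] ⟩]
    comps = ["NP[str]", "NP[str]"]

class DitransVerb(TransVerb):        # adds a dative argument
    comps = ["NP[str]", "NP[str]", "NP[dat]"]

schlaf = IntransVerb()               # 'sleep'
geb = DitransVerb()                  # 'give'

# A lexical rule maps lexical objects onto lexical objects, e.g., a
# (drastically simplified) passive rule suppressing the first argument:
def passive_lexical_rule(verb: VerbRoot) -> VerbRoot:
    out = VerbRoot()
    out.comps = verb.comps[1:]
    return out

print(passive_lexical_rule(geb).comps)   # ['NP[str]', 'NP[dat]']
```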

There are also principles in addition to inheritance hierarchies: the Semantics Principle presented in Section 9.1.6 holds for all languages. The Case Principle that we also saw is a constraint that only applies to a particular class of languages, namely nominative-accusative languages. Other languages have an ergative-absolutive system.

The assumption of innate linguistic knowledge is not necessary for the theory of language sketched here. As the discussion in Chapter 13 has shown, the question of whether this kind of knowledge exists has still not been answered conclusively. Should it turn out that this knowledge actually exists, the question arises of what exactly is innate. It would be a plausible assumption that the part of the inheritance hierarchy that is relevant for all languages is innate, together with the relevant principles (e.g., the constraints on Head-Argument structures and the Semantics Principle). It could, however, also be the case that only a part of the more generally valid types and principles is innate, since the fact that something is present in all languages does not entail that it is innate (see also Section 13.1.9).

In sum, one can say that theories that describe linguistic objects using a consistent descriptive inventory and make use of inheritance hierarchies to capture generalizations are the ones best suited to represent similarities between languages. Furthermore, this kind of theory is compatible with both a positive and a negative answer to the question of whether there is innate linguistic knowledge.

# **23.2 How to develop linguistic theories that capture cross-linguistic generalizations**

In the previous section I argued for a uniform representation of linguistic knowledge at all descriptive levels and for type hierarchies as a good tool for representing generalizations. This section explores a way to develop grammars that are motivated by facts from several languages.

If one looks at the current practice in various linguistic schools, one finds two extreme ways of approaching language. On the one hand, we have the Mainstream Generative Grammar (MGG) camp and, on the other hand, we have the Construction Grammar/Cognitive Grammar camp. I hasten to say that what I state here does not hold for all members of these groups, but for the extreme cases. The caricature of the MGG scientist is that he is looking for underlying structures. Since these have to be the same for all languages (poverty of the stimulus), it is sufficient to look at one language, say English. The result of this research strategy is that one ends up with models that were suggested for English by the most influential linguists, models into which others then try to fit other languages. Since English has an NP VP structure, all languages have to have it. Since English reorders constituents in passive sentences, passive is movement and all languages have to work this way. I discussed the respective analyses of German in more detail in Section 3.4.2 and in Chapter 20 and showed that the assumption that


passive is movement makes unwanted predictions for German, since the subject of passives stays in the object position in German. Furthermore, this analysis requires the assumption of invisible expletives, that is, entities that cannot be seen and do not have any meaning.

At the other extreme of the spectrum, we find people working in Construction Grammar or without any framework at all (see footnote 1 on page 1 for discussion) who claim that all languages are so different that we cannot even use the same vocabulary to analyze them. Moreover, within languages, we have so many different objects that it is impossible (or too early) to state any generalizations. Again, what I describe here are extreme positions and clichés.

In what follows, I sketch the procedure that we apply in the CoreGram project<sup>1</sup> (Müller 2013b, 2015c). In the CoreGram project, we work on a set of typologically diverse languages in parallel:

- German
- Danish
- Yiddish
- English
- French
- Spanish
- Persian
- Hindi
- Maltese
- Mandarin Chinese

These languages belong to diverse language families (Indo-European, Afro-Asiatic, Sino-Tibetan), and among the Indo-European languages they belong to different groups (Germanic, Romance, Indo-Iranian). Figure 23.2 provides an overview. We work out fully formalized, computer-processable grammar fragments in the framework of HPSG that have a semantics component. The details will not be discussed here; the interested reader is referred to Müller (2015c).

As was argued in previous sections, the assumption of innate language-specific knowledge should be kept to a minimum. This is also what Chomsky suggested in his Minimalist Program. There may even be no language-specific innate knowledge at all, a view taken in Construction Grammar/Cognitive Grammar. So, instead of imposing constraints from one language onto other languages, a bottom-up approach seems to be

<sup>1</sup> https://hpsg.hu-berlin.de/Projects/CoreGram.html, 20th February 2023.

Figure 23.2: Language families and groups of the languages covered in the CoreGram project

more appropriate: grammars for individual languages should be motivated language-internally. Grammars that share certain properties can be grouped in classes. This makes it possible to capture generalizations about groups of languages and natural language as such. Let us consider a few example languages: German, Dutch, Danish, English and French. If we start developing grammars for German and Dutch, we find that they share a lot of properties: for instance, both are SOV and V2 languages and both have a verbal complex. One main difference is the order of elements in the verbal complex. The situation can be depicted as in Figure 23.3. There are some properties that are shared

Figure 23.3: Shared properties of German and Dutch

between German and Dutch (Set 3). For instance, the argument structure of lexical items (a list containing descriptions of the syntactic and semantic properties of arguments) and the linking of these arguments to the meaning of the lexical items are contained in Set 3. In addition to the constraints for SOV languages, the constraints on verb position and on the fronting of a constituent in V2 clauses are contained in Set 3. The respective constraints are shared between the two grammars. Although these sets are arranged in a hierarchy in Figure 23.3 and the following figures, this has nothing to do with the type hierarchies that have been discussed in the previous section. Those type hierarchies are part of our linguistic theories, and various parts of such hierarchies can be in different sets: those parts of the type hierarchy that concern more general aspects can be in Set 3 in Figure 23.3 and


those that are specific to Dutch or German are in the respective other sets. When we add another language, say Danish, we get further differences. While German and Dutch are SOV, Danish is an SVO language. Figure 23.4 shows the resulting situation: the topmost node represents constraints that hold for all the languages considered so far (for instance the argument structure constraints, linking and V2), and the node below it (Set 4) contains constraints that hold for German and Dutch only.<sup>2</sup> For instance, Set 4 contains constraints regarding verbal complexes and SOV order. The union of Set 4 and Set 5 is Set 3 of Figure 23.3.

Figure 23.4: Shared properties of German, Dutch, and Danish
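The bottom-up computation of such shared constraint sets can be pictured with plain set operations. In the following Python sketch, the constraint names are invented placeholders for actual HPSG constraints; real grammars of course contain far more than a handful of constraints.

```python
# Each grammar is approximated as a set of named constraints.

grammars = {
    "German": {"arg-st", "linking", "V2", "SOV", "verbal-complex",
               "vc-order-german"},
    "Dutch":  {"arg-st", "linking", "V2", "SOV", "verbal-complex",
               "vc-order-dutch"},
    "Danish": {"arg-st", "linking", "V2", "SVO"},
}

# Topmost set (Set 5 in Figure 23.4): shared by all languages so far.
top = set.intersection(*grammars.values())
# -> {'arg-st', 'linking', 'V2'}

# Set 4: what German and Dutch share beyond the topmost set.
set4 = (grammars["German"] & grammars["Dutch"]) - top
# -> {'SOV', 'verbal-complex'}

# Their union reconstructs Set 3 of Figure 23.3:
set3 = top | set4

# Language-particular residues (Set 1 and Set 2 in Figure 23.3):
set1 = grammars["German"] - set3    # {'vc-order-german'}
set2 = grammars["Dutch"] - set3     # {'vc-order-dutch'}
```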

If we add further languages, further constraint sets will be distinguished. Figure 23.5 on the facing page shows the situation that results when we add English and French. Again, the picture is not complete since there are constraints that are shared by Danish and English but not by French, but the general idea should be clear: by systematically working this way, we should arrive at constraint sets that directly correspond to those that have been established in the typological literature.

The interesting question is what will be the topmost set if we consider enough languages. At first glance, one would expect that all languages have valence representations and linkings between these and the semantics of lexical items (argument structure lists in the HPSG framework). However, Koenig & Michelson (2012) argue for an analysis of Oneida (a Northern Iroquoian language) that does not include a representation of syntactic valence. If this analysis is correct, syntactic argument structure would not be universal. It would, of course, be characteristic of a large number of languages, but it would not be part of the topmost set. So this leaves us with just one candidate for the topmost

<sup>2</sup> In principle, there could be constraints that hold for Dutch and Danish but not for German, or for German and Danish but not for Dutch. Such constraints would be removed from the respective language-particular sets and inserted into another constraint set higher up in the hierarchy. These sets are not illustrated in the figure, and I keep the names Set 1 and Set 2 from Figure 23.3 for the constraint sets for German and Dutch.

Figure 23.5: Languages and language classes

set from the area of syntax: the constraints that license the combination of two or more linguistic objects. This is basically Chomsky's External Merge without the binarity restriction.<sup>3</sup> In addition, the topmost set would, of course, contain the basic machinery for representing phonology and semantics.

It should be clear from what has been said so far that the goal of every scientist who works this way is to find generalizations and to describe a new language in a way that reuses theoretical constructs that have been found useful for a language that is already covered. However, as was explained above, the resulting grammars should be motivated by data of the respective languages and not by facts from other languages. In situations where more than one analysis would be compatible with a given dataset for language X, the evidence from language Y with similar constructs is most welcome and can be used as evidence in favor of one of the two analyses for language X. I call this approach the *bottom-up approach with cheating*: unless there is contradicting evidence, we can reuse analyses that have been developed for other languages.

Note that this approach is compatible with the rather agnostic view advocated by Haspelmath (2010a), Dryer (1997), Croft (2001: Section 1.4.2–1.4.3), and others, who argue that descriptive categories should be language-specific, that is, the notion of *subject* for Tagalog is different from the one for English, the category *noun* in English is different

<sup>3</sup>Note that binarity is more restrictive than flat structures: there is an additional constraint that there have to be exactly two daughters. As was argued in Section 21.10.4, one needs phrasal constructions with more than two constituents.


from the category *noun* in Persian and so on. Even if one follows such extreme positions, one can still derive generalizations regarding constituent structure, head-argument relations and so on. However, I believe that some categories can fruitfully be used cross-linguistically; if not universally, then at least for language classes. As Newmeyer (2010: 692) notes with regard to the notion of *subject*: calling two items *subject* in one language does not entail that they have identical properties. The same is true for two linguistic items from different languages: calling a Persian linguistic item *subject* does not entail that it has exactly the same properties as an English linguistic item that is called *subject*. The same is, of course, true for all other categories and relations, for instance, parts of speech: Persian nouns do not share all properties with English nouns.<sup>4</sup> Haspelmath (2010c: 697) writes: "Generative linguists try to use as many crosslinguistic categories in the description of individual languages as possible, and this often leads to insurmountable problems." If the assumption of a category results in problems, these have to be solved. If this is not possible with the given set of categories/features, new ones have to be assumed. This is not a drawback of the methodology; quite the opposite is true: if we have found something that does not integrate nicely into what we already have, this is a sign that we have discovered something new and exciting. If we stick to language-particular categories and features, it is much harder to notice that a special phenomenon is involved, since all categories and features are specific to one language anyway. Note also that not all speakers of a language community have exactly the same categories. If one were to take the idea of language-particular category symbols to an extreme, one would end up with person-specific category symbols like *Klaus-English-noun*.

After my talk at MIT in 2013, members of the linguistics department objected to the approach taken in the CoreGram project and claimed that it would not make any predictions as far as possible/impossible languages are concerned. Regarding predictions, two things must be said: firstly, predictions are made on a language-particular basis. As an example, consider the following sentences from Netter (1991):

(4) b. [Versucht, einen Freund vorzustellen], hat er ihr noch nie.
       tried a.acc friend to.introduce has he.nom her.dat yet never
       'He never before tried to introduce a friend to her.'

<sup>4</sup>Note that using labels like *Persian Noun* and *English Noun* (see for instance Haspelmath 2010a: Section 2 for such a suggestion regarding case, e.g., Russian Dative, Korean Dative, …) is somewhat strange, since it implies that both Persian nouns and English nouns are nouns of some sort. Instead of using the category *Persian Noun*, one could assign objects of the respective class to the class *noun* and add a feature language with the value *persian*. This simple trick allows one to assign both objects of the type *Persian Noun* and objects of the type *English Noun* to the class *noun* and still maintain the fact that there are differences. Of course, no theoretical linguist would introduce the language feature to differentiate between Persian and English nouns; rather, nouns in the respective languages have other features that make them differ. So the part of speech classification as noun is a generalization over nouns in various languages, and the categories *Persian Noun* and *English Noun* are feature bundles that contain further, language-specific information.

When I first read these sentences, I had no idea about their structure. I switched on my computer and typed them in, and within milliseconds I got an analysis of the sentences. By inspecting the result, I realized that these sentences are combinations of partial verb phrase fronting and the so-called third construction (Müller 1999b: 439). I had previously implemented analyses of both phenomena but had never thought about the interaction of the two. The grammar predicted that examples like (4) are grammatical. Similarly, the constraints of the grammar can interact to rule out certain structures. So predictions about ungrammaticality/impossible structures are in fact made as well.

Secondly, the topmost constraint set holds for all languages seen so far. It can be regarded as a hypothesis about properties that are shared by all languages. This constraint set contains constraints about the connection between syntax and information structure, and such constraints allow for V2 languages but rule out languages with the verb in penultimate position (see Kayne 1994: 50 for the claim that such languages do not exist; Kayne develops a complicated syntactic system that predicts this). Of course, if a language is found that places the verb in penultimate position for the encoding of sentence types or some other communicative effect, a more general topmost set has to be defined. But the same holds for Minimalist theories: if languages are found that are incompatible with basic assumptions, the basic assumptions have to be revised. As with the language-particular constraints, the constraints in the topmost set make certain predictions about what can and what cannot be found in languages.

Frequently discussed examples such as languages that form questions by reversing the order of the words in a string (Haider 2015: 224; Musso et al. 2003) need not be ruled out by the grammar, since they are ruled out by language-external constraints: we simply lack the working memory to do such complex computations.

A variant of this argument comes from David Pesetsky and was raised in Facebook discussions of an article by Paul Ibbotson and Michael Tomasello published in The Guardian.<sup>5</sup> Pesetsky claimed that Tomasello's theory of language acquisition could not explain why we find V2 languages but no V3 languages. First, I do not know of anything that blocks V3 languages in current Minimalist theories. So, per se, the fact that V3 languages may not exist cannot be used to support any of the competing approaches. Of course, the question could be asked whether the V3 pattern would be useful for reaching our communicative goals and whether it could be easily acquired. Now, with V2 as a pattern, it is clear that we have exactly one position that can be used for special purposes in the V2 sentence (topic or focus). For monovalent and bivalent verbs, we have an argument that can be placed in initial position. The situation is different for the hypothetical V3 languages, though: if we have monovalent verbs like *sleep*, there is nothing for the second position. As Pesetsky pointed out in the answer to my comment on a blog post, languages solve such problems by using expletives. For instance, some languages insert an expletive to mark subject extraction in embedded interrogative sentences, since otherwise the fact that the subject is extracted would not be recognizable by the hearer. So

<sup>5</sup> *The roots of language: What makes us different from other animals?* Published 2015-11-05. http://www.theguardian.com/science/head-quarters/2015/nov/05/roots-language-what-makes-us-different-animals, 2018/04/25.

the expletive helps to make the structure transparent. V2 languages also use expletives to fill the initial position if speakers do not want to place anything in the special, designated position:

(5) Es kamen drei Männer zum Tor hinein.
    expl came three men to.the gate in
    'Three men came through the gate.'

In order to do the same in V3 languages, one would have to put two expletives in front of the verb. So there seem to be many disadvantages of a V3 system that V2 systems do not have, and hence one would expect V3 systems to be less likely to come into existence. If they existed, they would be expected to be subject to change in the course of time, e.g., omission of the expletive with intransitives, optional V2 with transitives, and finally V2 in general. With the new modeling techniques for language acquisition and agent-based community simulation, one can actually simulate such processes, and I guess that in the years to come we will see exciting work in this area.

Cinque (1999: 106) suggested a cascade of functional projections to account for recurring orderings in the languages of the world. He assumes that elaborate tree structures play a role in the analysis of all sentences in all languages, even if there is no evidence for the respective morphosyntactic distinctions in a particular language (see also Cinque & Rizzi 2010: 55). In the latter case, Cinque assumes that the respective tree nodes are empty. Cinque's results could be incorporated into the model advocated here. We would define part of speech categories and morphosyntactic features in the topmost set and state linearization constraints that enforce the order that Cinque encoded directly in his tree structure. In languages in which such categories are not manifested by lexical material, the constraints would simply never apply. Neither empty elements nor elaborate tree structures would be needed. Thus, Cinque's data could be covered in a better way in an HPSG with a rich UG, but I nevertheless refrain from introducing 400 categories (or features) into the theories of all languages, and, again, I point out that such a rich and language-specific UG is implausible from a genetic point of view. Therefore, I wait for other, probably functional, explanations of the Cinque data.

Note also that implicational universals can be derived from hierarchically organized constraint sets such as the ones proposed here. For instance, one can derive from Figure 23.5 the implicational statement that all SOV languages are V2 languages, since there is no language that has constraints from Set 4 that does not also have the constraints of Set 7. Of course, this implicational statement is wrong, since there are lots and lots of SOV languages and just exceptionally few V2 languages. So, as soon as we add other languages such as Persian or Japanese, the picture will change.
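Reading such implicational statements off the constraint sets amounts to a simple check, sketched below with invented toy data: a statement "every language with X also has Y" holds in a sample as long as no language has X without Y.

```python
# Check an implicational universal against a sample of languages.

def implication_holds(sample: dict, x: str, y: str) -> bool:
    """True if every language whose constraint set contains x also has y."""
    return all(y in constraints
               for constraints in sample.values()
               if x in constraints)

sample = {
    "German": {"SOV", "V2"}, "Dutch": {"SOV", "V2"},
    "Danish": {"SVO", "V2"}, "English": {"SVO"}, "French": {"SVO"},
}
print(implication_holds(sample, "SOV", "V2"))   # True in this small sample

# Adding an SOV language without V2 falsifies the statement:
sample["Japanese"] = {"SOV"}
print(implication_holds(sample, "SOV", "V2"))   # False
```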

The methodology suggested here differs from what is done in MGG, since MGG stipulates the general constraints that are supposed to hold for all languages on the basis of general speculations about language. In the best case, these general assumptions are fed by a lot of experience with different languages and grammars; in the worst case, they are derived from insights gathered from one or more Indo-European languages. Quite often, impressionistic data is used to motivate rather far-reaching fundamental design decisions (Fanselow 2009, Sternefeld & Richter 2012, Haider 2014). It is interesting to note that this is exactly what members of the MGG camp reproach typologists for. Evans & Levinson (2009a) pointed out that counterexamples can be found for many alleged universals. A frequent response to this is that unanalyzed data cannot refute grammatical hypotheses (see, for instance, Freidin 2009: 454). In the very same way, it has to be said that unanalyzed data should not be used to build theories on (Fanselow 2009). In the CoreGram project, we aim to develop broad-coverage grammars of several languages, so the constraints that make it to the top node are motivated and not stipulated on the basis of intuitive implicit knowledge about language.

Since it is data-oriented and does not presuppose innate language-specific knowledge, this research strategy is compatible with work carried out in Construction Grammar (see Goldberg 2013b: 481 for an explicit statement to this end) and in any case it should also be compatible with the Minimalist world.

# **24 Conclusion**

The analyses discussed in this book show a number of similarities. All frameworks use complex categories to describe linguistic objects. This is most obvious for GPSG, LFG, HPSG, CxG and FTAG; however, GB/Minimalism and Categorial Grammar also talk about NPs in the third person singular, and the relevant features for part of speech, person and number form part of a complex category. In GB, there are the features N and V with binary values (Chomsky 1970: 199), Stabler (1992: 119) formalizes *Barriers* with feature-value pairs, and Sauerland & Elbourne (2002: 290–291) argue for the use of feature-value pairs in a Minimalist theory. Also see Veenstra (1998) for a constraint-based formalization of a Minimalist analysis using typed feature descriptions. Dependency Grammar dialects like Hellwig's Dependency Unification Grammar also use feature-value pairs (Hellwig 2003: 612).

Furthermore, there is a consensus in all current frameworks (with the exception of Construction Grammar and Dependency Grammar) about how the sentence structure of German should be analyzed: German is an SOV and V2 language. Clauses with verb-initial order resemble verb-final ones in terms of structure. The finite verb is either moved (GB) or stands in a relation to an element in verb-final position (HPSG). Verb-second clauses consist of verb-initial clauses out of which one constituent has been extracted. It is also possible to see some convergence with regard to the analysis of the passive: some ideas originally formulated by Haider (1984, 1985b, 1986a) in the framework of GB have been adopted by HPSG. Some variants of Construction Grammar also make use of a specially marked 'designated argument' (Michaelis & Ruppenhofer 2001: 55–57).

If we consider new developments in the individual frameworks, it becomes clear that the nature of the proposed analyses can sometimes differ drastically. Whereas CG, LFG, HPSG and CxG are surface-oriented, sometimes very abstract structures are assumed in Minimalism, and in some cases one tries to trace all languages back to a common base structure (Universal Base Hypothesis).<sup>1</sup> This kind of approach only makes sense if one assumes that there is innate linguistic knowledge about this base structure common to all languages as well as about the operations necessary to derive the surface structures. As was shown in Chapter 13, all arguments for the assumption of innate linguistic knowledge are either not tenable or controversial at the very least. The acquisition of linguistic abilities can to a large extent receive an input-based explanation (Sections 13.8.3, 16.3 and 16.4). Not all questions about acquisition have been settled once and for all, but input-based approaches are at least plausible enough for one to be very cautious about any assumption of innate linguistic knowledge.

<sup>1</sup> It should be noted that there are currently many subvariants and individual opinions in the Minimalist community so that it is only possible – as with CxG – to talk about tendencies.


Models such as LFG, CG, HPSG, CxG and TAG are compatible with performance data, something that is not true of certain transformation-based approaches, which are viewed as theories of competence that do not make any claims about performance. In MGG, it is assumed that there are other mechanisms for working with linguistic knowledge, for example, mechanisms that combine 'chunks' (fragments of linguistic material). If one wishes to make these assumptions, then it is necessary to explain how chunks and the processing of chunks are acquired, and not how a complex system of transformations and transformation-comparing constraints is acquired. This means that the problem of language acquisition would be a very different one. If one assumes a chunk-based approach, then the innate knowledge about a universal transformational base would only be used to derive a surface-oriented grammar. This then poses the question of what exactly the evidence for transformations in a competence grammar is, and whether it would not be preferable to simply assume that the competence grammar is of the kind assumed by LFG, CG, HPSG, CxG or TAG. One can therefore conclude that constraint-based analyses and the kind of transformational approaches that allow a constraint-based reformulation are the only approaches that are compatible with the current facts, whereas all other analyses require additional assumptions.

A number of works in Minimalism differ from those in other frameworks in that they assume structures (sometimes also invisible structures) that can only be motivated by evidence from other languages. This can streamline the entire apparatus for deriving different structures, but the overall costs of the approach are not reduced: some amount of the cost is just transferred to the UG component. The abstract grammars that result cannot be learned from the input.

One can take from this discussion that only constraint-based, surface-oriented models are adequate and explanatory: they are also compatible with psycholinguistic facts and plausible from the point of view of acquisition.

If we now compare these approaches, we see that a number of analyses can be translated into one another. LFG (and some variants of CxG and DG) differ from all other theories in that grammatical functions such as subject and object are primitives of the theory. If one does not want this, then it is possible to replace these labels with Argument1, Argument2, etc. The numbering of the arguments would correspond to their relative obliqueness. LFG would then move closer to HPSG. Alternatively, one could additionally mark arguments in HPSG and CxG with regard to their grammatical function. This is what is done for the analysis of the passive (designated argument).

LFG, HPSG, CxG and variants of Categorial Grammar (Moens et al. 1989, Briscoe 2000, Villavicencio 2002) possess means for the hierarchical organization of knowledge, which is important for capturing generalizations. It is, of course, possible to expand any other framework in this way, but this has never been done explicitly, except in computer implementations, and inheritance hierarchies do not play an active role in theorizing in the other frameworks.

In HPSG and CxG, roots, stems, words, morphological and syntactic rules are all objects that can be described with the same means. This allows one to make generalizations that affect very different objects (see Chapter 23). In LFG, c-structures are viewed as something fundamentally different, which is why this kind of generalization is not possible. In cross-linguistic work, there is an attempt to capture similarities in the f-structure; the c-structure is less important and is not even discussed in a number of works. Furthermore, its implementation can differ enormously from language to language. For this reason, my personal preference is for frameworks that describe all linguistic objects using the same means, that is, HPSG and CxG. Formally, nothing stands in the way of a description of the c-structure of an LFG grammar using feature-value pairs, so that in years to come there could be even more convergence between the theories. For hybrid forms of HPSG and LFG, see Ackerman & Webelhuth (1998) and Hellan & Haugereid (2003), for example.

If one compares CxG and HPSG, it becomes apparent that the degree of formalization in CxG works is relatively low and a number of questions remain unanswered. The more formal approaches in CxG (with the exception of Fluid Construction Grammar) are variants of HPSG. There are relatively few precisely worked-out analyses in Construction Grammar and no description of German that would be comparable to the other approaches presented in this book. To be fair, it must be said that Construction Grammar is the youngest of the theories discussed here. Its most important contributions to linguistic theory have been integrated into frameworks such as HPSG and LFG.

The theories of the future will be a fusion of surface-oriented, constraint-based and model-theoretic approaches like CG, LFG, HPSG, Construction Grammar, equivalent variants of TAG, and GB/Minimalist approaches that will be reformulated as constraint-based. (Variants of) Minimalism and (variants of) Construction Grammar are the most widely adopted approaches at present; I actually suspect the truth to lie somewhere in the middle. The linguistics of the future will be data-oriented. Introspection as the sole method of data collection has proven unreliable (Müller 2007c, Meurers & Müller 2009) and is being increasingly complemented by experimental and corpus-based analyses.

Statistical information and statistical processes play a very important role in machine translation and are becoming more important for linguistics in the narrow sense (Abney 1996). We have seen that statistical information is important in the acquisition process, and Abney discusses other areas of language such as language change, parsing preferences and gradience in grammaticality judgments. Following a heavy focus on statistical procedures, there is now a transition to hybrid forms in computational linguistics,<sup>2</sup> since it has been noticed that it is not possible to exceed certain levels of quality with statistical methods alone (Steedman 2011, Church 2011, Kay 2011). The same holds here as above: the truth is somewhere in between, that is, in combined systems. In order to have something to combine, the relevant linguistic theories first need to be developed. As Manfred Pinkal said: "It is not possible to build systems that understand language without understanding language."

<sup>2</sup> See Kaufmann & Pfister (2007) and Kaufmann (2009) for the combination of a speech recognizer with an HPSG grammar.

# **Appendix A: Solutions to the exercises**

# **A.1 Introduction and basic terms**



On (1c): theoretically, this could also be a case of extraposition of the relative clause to the postfield. Since *eine Frau, die Peter kennt* is a constituent, however, it is assumed that no reordering of the relative clause has taken place. Instead, we have a simpler structure with *eine Frau, die Peter kennt* as a complete NP in the middle field.

# **A.2 Phrase structure grammars**

	- (2) Tralala → Trulla Trololo

One should bear in mind what the aim of a theory of grammar is. If our goal is to describe the human language capacity, then a grammar with more rules could be better than other grammars with fewer rules. This is because psycholinguistic research has shown that highly frequent units are simply stored in our brains and not built up from their individual parts each time, although we would of course be able to do this.

3. The problem here is the fact that it is possible to derive a completely empty noun phrase (see Figure A.1). This noun phrase could be inserted in all positions where

Figure A.1: Noun phrases without a visible determiner and noun

an otherwise filled NP would have to occur. Then, we would be able to analyze sequences of words such as (3), where the subject of *schläft* 'sleeps' is realized by an empty NP:

(3) \* Ich glaube, dass schläft.
       I believe that sleeps

This problem can be solved using a feature that indicates whether the left periphery of the N is empty. Visible Ns and Ns containing at least an adjective would have the value '−' and all others '+'. Empty determiners could then only be combined with Ns that have the value '−'. See Netter (1994).

(4) interessante Bücher
    interesting books

If adjectives are combined with NPs, however, it still has to be explained why (5) is ungrammatical.

(5) \* interessante die Bücher
       interesting the books

For a detailed discussion of this topic, see Müller (2007a: Section 6.6.2).

(6) a. interessante [Aufsätze und Bücher]
       interesting essays and books
    b. interessante [Aufsätze und Bücher aus Stuttgart]
       interesting essays and books from Stuttgart

Since adjectives can only be combined directly with nouns, these phrases cannot be analyzed: *Bücher* 'books' or *Bücher aus Stuttgart* 'books from Stuttgart' would be complete NPs. Since it is assumed that coordinated elements always have the same syntactic category, *Aufsätze* 'essays' would have to be an NP as well. *Aufsätze und Bücher* and *Aufsätze und Bücher aus Stuttgart* would then also be NPs, and it remains unexplained how an adjective can be combined with such an NP. Because of (5), we must rule out analyses that assume that full NPs combine with adjectives.

See Chapter 19 for a general discussion of empty elements.

6. If a specific determiner or just any determiner were to be combined with an adjective to form a complete NP, there would be no room for the integration of postnominal modifiers like modifying genitives, PPs and relative clauses. For PPs and relative clauses, analyses have been suggested in which these postnominal modifiers attach to complete NPs (Kiss 2005), but modifying genitives usually attach to smaller units. But even if one allows postnominal modifiers to attach to complete NPs, one cannot account for the iteration of adjectives or for arguments that depend on the elided noun.

So, the simplest way to cope with the German data is the assumption of an empty noun. Alternatively, one could assume that an adjective is directly projected to an N. This N can then be modified by further adjectives or postnominal modifiers. The N is combined with a determiner to form a full NP. For phrases that involve elided relational nouns, one would have to assume the projection of an argument like *vom Gleimtunnel* 'of the Gleimtunnel' to N. The N could be further modified or combined with a determiner directly.


(7) der auf seinen Sohn sehr stolze Vater
    the on his son very proud father
    'the father very proud of his son'

One would either have to allow specifiers to be combined with their heads before complements are, or allow crossing lines in trees. Another assumption could be that German is like English; however, adjectival complements would then have to be obligatorily reordered to a position before their specifier. For a description of this kind of reordering, see Chapter 3. See Section 13.1.2 for a discussion of X-bar theory.

(8) a. Der Mann hilft der Frau.
       the.nom man helps the.dat woman
       'The man helps the woman.'
    b. Er gibt ihr das Buch.
       he.nom gives her.dat the.acc book
       'He gives her the book.'
    c. Er wartet auf ein Wunder.
       he.nom waits on a miracle.acc
       'He is waiting for a miracle.'

(9) a. \* Der Mann hilft er.
       the.nom man helps he.nom
    b. \* Er gibt ihr den Buch.
       he.nom gives her.dat the.acc book

In order to rule out the last two sentences, the grammar has to contain information about case. The following grammar will do the job:

(10) a. s → np(nom), v(nom\_dat), np(dat)
     b. s → np(nom), v(nom\_dat\_acc), np(dat), np(acc)
     c. s → np(nom), v(nom\_pp\_auf), pp(auf,acc)
     d. pp(Pform,Case) → p(Pform,Case), np(Case)
     e. np(Case) → d(Case), n(Case)
     f. v(nom\_dat) → hilft
     g. v(nom\_dat\_acc) → gibt
     h. v(nom\_pp\_auf) → wartet
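To see the case checking at work, here is a toy recognizer in Python. It is a deliberate simplification under the assumption that NPs have already been recognized and tagged for case; the rule and verb tables paraphrase part of (10), and everything else is invented for illustration.

```python
# Toy recognizer for the flat s-rules of the grammar in (10).
# NPs are assumed to be pre-tagged for case, e.g., "np(nom)".

S_RULES = [
    ["np(nom)", "v(nom_dat)", "np(dat)"],
    ["np(nom)", "v(nom_dat_acc)", "np(dat)", "np(acc)"],
]

VERBS = {"hilft": "v(nom_dat)", "gibt": "v(nom_dat_acc)"}

def recognize(tokens):
    """Replace verbs by their valence category and match against the rules."""
    cats = [VERBS.get(t, t) for t in tokens]
    return cats in S_RULES

# (8a) Der Mann hilft der Frau. – nominative plus dative is licensed:
print(recognize(["np(nom)", "hilft", "np(dat)"]))   # True
# (9a) * Der Mann hilft er. – a second nominative is not licensed:
print(recognize(["np(nom)", "hilft", "np(nom)"]))   # False
```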

# **A.3 Transformational Grammar – Government & Binding**

dass der Hai _ attackiert wir- -d
that the shark _ attacked is

# **A.4 Generalized Phrase Structure Grammar**

In order to analyze the sentences in (11), one requires a rule for transitive verbs and a metarule for the extraction of an element. Furthermore, rules for the combination of elements in the noun phrase are required.

(11) a. [dass] der Mann ihn liest
        that the man it reads
        'that the man reads it'
     b. [dass] ihn der Mann liest
        that it the man reads
        'that the man reads it'
     c. Der Mann liest ihn.
        the man reads it
        'The man reads it.'

It is possible to analyze the sentences in (11a,b) using the rules in (12) and the lexical entries in (13).

- (12) a. V3 → H[6], N2[case nom], N2[case acc]
       b. N2 → Det[case CAS], H1[case CAS]
       c. N1 → H[27]
- (13) a. Det[case nom] → der
       b. N[27] → Mann
       c. V[6, +fin] → liest
       d. N2[case acc] → ihn

The rules in (12b,c) correspond to the X̄ rules that we encountered in Section 2.4.1. They differ from those rules only in that the part of speech of the head is not given on the right-hand side of the rule. The part of speech is determined by the Head Feature Convention: it is identical to that of the left-hand side of the rule, that is, it must be N in (12b,c). It also follows from the Head Feature Convention that the whole NP has the same case as the head, so case does not have to be mentioned additionally in the rule. 27 is the subcat value; the number itself is arbitrary.
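A minimal sketch of what the Head Feature Convention contributes, with categories as Python dictionaries (an illustrative encoding, not the GPSG formalism itself):

```python
# In a rule like N2 -> Det[case CAS] H1[case CAS], the head daughter's
# head features (part of speech and case here) reappear on the mother,
# which is why (12b,c) need not mention N on the right-hand side.
def mother(head_daughter, bar_level):
    """Project the mother category from the head daughter."""
    return {"pos": head_daughter["pos"],    # head feature: part of speech
            "case": head_daughter["case"],  # head feature: case
            "bar": bar_level}               # the bar level comes from the rule

mann = {"pos": "N", "case": "nom", "bar": 0}
n1 = mother(mann, 1)   # N1: still N, still nom
n2 = mother(n1, 2)     # N2: still N, still nom
print(n2)              # {'pos': 'N', 'case': 'nom', 'bar': 2}
```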

In order for the verb to appear in the correct position, we need linearization rules:

(14) V[+mc] < X
     X < V[−mc]

The fact that the determiner precedes the noun is ensured by the following LP-rule:

(15) Det < X

The Extraction Metarule in (16) is required in order to analyze (11c):

(16) V3 → W, X ↦ V3/X → W

Among others, this metarule licenses the rule in (17) for (12a):

(17) V3/N2[case nom] → H[6], N2[case acc]
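The effect of the metarule can be sketched as a function from rules to rules. The string encoding of categories below is an assumption for illustration, and the head daughter is skipped for simplicity:

```python
# Sketch of the Extraction Metarule (16): every rule V3 -> W, X yields
# a rule V3/X -> W in which one daughter is missing and recorded as
# "slashed" on the mother. Skipping H daughters is a simplification.
def extraction_metarule(rules):
    derived = []
    for lhs, rhs in rules:
        if lhs != "V3":
            continue
        for i, x in enumerate(rhs):
            if x.startswith("H"):            # do not extract the head
                continue
            w = rhs[:i] + rhs[i + 1:]        # W: the remaining daughters
            derived.append((lhs + "/" + x, w))
    return derived

rule_12a = ("V3", ["H[6]", "N2[case nom]", "N2[case acc]"])
for lhs, rhs in extraction_metarule([rule_12a]):
    print(lhs, "->", ", ".join(rhs))
# V3/N2[case nom] -> H[6], N2[case acc]   (= rule (17))
# V3/N2[case acc] -> H[6], N2[case nom]
```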

The rule in (18) is used to bind off long-distance dependencies.

(18) V3[+fin] → X[+top], V3[+mc]/X

The following linearization rule ensures that the +top-constituent precedes the sentence in which it is missing:

(19) [+top] < X

Figure A.2 shows the structure licensed by the grammar.

Figure A.2: Analysis of *Der Mann liest ihn.* 'The man reads it.'

In sum, one can say that the grammar licensing the sentences in (11) should have (at least) the following parts:

1. ID rules:

(20) a. V3 → H[6], N2[case nom], N2[case acc]
     b. N2 → Det[case CAS], H1[case CAS]
     c. N1 → H[27]

2. LP rules:

(21) V[+mc] < X
     X < V[−mc]
     Det < X
     [+top] < X

3. Metarules:

(22) V3 → W, X ↦ V3/X → W

4. Lexical entries:

   (23) a. Det[case nom] → der
        b. N[27] → Mann
        c. V[6, +fin] → liest
        d. N2[case acc] → ihn

# **A.5 Feature descriptions**

1. For the class [+V], the type *verbal* is assumed with the subtypes *adjective* and *verb*. For the class [−V] there is the type *non-verbal* and its subtypes *noun* and *preposition*. This is analogous for the N values. The corresponding hierarchy is given in the following figure:
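A small sketch of this hierarchy, with the supertype sets written out as a Python mapping (the encoding is illustrative):

```python
# verb and adjective are verbal ([+V]); noun and preposition are
# non-verbal ([-V]). Analogously for the N values: noun and adjective
# are nominal ([+N]); verb and preposition are non-nominal ([-N]).
SUPERTYPES = {
    "verb":        {"verbal", "non-nominal"},
    "adjective":   {"verbal", "nominal"},
    "noun":        {"non-verbal", "nominal"},
    "preposition": {"non-verbal", "non-nominal"},
}

def is_a(subtype, supertype):
    return subtype == supertype or supertype in SUPERTYPES.get(subtype, set())

print(is_a("adjective", "verbal"))  # True: adjectives pattern with verbs
print(is_a("noun", "verbal"))       # False: nouns are [-V]
```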

2. Lists can be described using recursive structures that consist of both a list beginning and a rest. The rest can either be a non-empty list (*ne\_list*) or the empty list (*e\_list*). The list ⟨ *a*, *b*, *c* ⟩ can be represented as follows:

(24)

    ne_list
    FIRST a
    REST  ne_list
          FIRST b
          REST  ne_list
                FIRST c
                REST  e_list

3. If we extend the data structure in (24) by two additional features, it is possible to do without *append*. The keyword is *difference list*. A difference list consists of a list and a pointer to the end of the list.


(25)

    diff-list
    LIST  ne_list
          FIRST a
          REST  ne_list
                FIRST b
                REST  1 list
    LAST  1

Unlike the list representation in (24), the rest value of the end of the list is not *e\_list*, but rather simply *list*. It is then possible to extend a list by adding another list to the point where it ends. The concatenation of (25) and (26a) is (26b).

(26) a.

    diff-list
    LIST  ne_list
          FIRST c
          REST  2 list
    LAST  2

b.

    diff-list
    LIST  ne_list
          FIRST a
          REST  ne_list
                FIRST b
                REST  ne_list
                      FIRST c
                      REST  2 list
    LAST  2

In order to combine the two lists, the LIST value of the second list has to be identified with the LAST value of the first list. The LAST value of the resulting list then corresponds to the LAST value of the second list (2 in the example).

Information about the encoding of difference lists can be found by searching for the keywords *list*, *append*, and *feature structure*. In the search results, one can find pages on developing grammars that explain difference lists.
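A sketch of the idea with the structures above modelled as Python dictionaries (an illustrative encoding; non-empty lists only): concatenation fills the open tail in place, so no `append`-style traversal and copying is needed.

```python
# LIST points at the first cell, LAST at the final cell, whose open
# REST slot (None here) plays the role of the underspecified `list`.
def diff_list(*items):
    dummy = {"FIRST": None, "REST": None}
    last = dummy
    for x in items:
        last["REST"] = {"FIRST": x, "REST": None}  # REST None = open tail
        last = last["REST"]
    return {"LIST": dummy["REST"], "LAST": last}

def concat(d1, d2):
    d1["LAST"]["REST"] = d2["LIST"]   # identify d2's LIST with d1's open tail
    return {"LIST": d1["LIST"], "LAST": d2["LAST"]}

ab = diff_list("a", "b")              # (25)
abc = concat(ab, diff_list("c"))      # (25) + (26a) = (26b)
cell = abc["LIST"]
while cell:                           # prints a, b, c
    print(cell["FIRST"])
    cell = cell["REST"]
```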

# **A.6 Lexical Functional Grammar**

- (27) *kannte* V (↑ PRED) = 'KENNEN⟨SUBJ, OBJ⟩'
       (↑ SUBJ AGR CAS) = NOM
       (↑ OBJ AGR CAS) = ACC
       (↑ TENSE) = PAST
- (28) Dem Kind hilft Sandy.
       the.dat child helps Sandy.nom
       'Sandy helps the child.'

The analysis is parallel to the analysis in Figure 7.5 on page 244. The difference is that the object is in the dative rather than the accusative; the respective grammatical function is therefore OBJθ rather than OBJ.

The necessary c-structure rules are given in (29):

$$\begin{array}{llll}
\text{(29)} & \text{a. } \mathrm{VP} \rightarrow & \mathrm{NP} & \mathrm{VP}\\
& & (\uparrow \mathrm{SUBJ}|\mathrm{OBJ}|\mathrm{OBJ}_{\theta}) = \downarrow & \uparrow = \downarrow\\
& \text{b. } \mathrm{VP} \rightarrow & \mathrm{V} & \\
& & \uparrow = \downarrow & \\
& \text{c. } \mathrm{C}' \rightarrow & \mathrm{C} & \mathrm{VP}\\
& & \uparrow = \downarrow & \uparrow = \downarrow\\
& \text{d. } \mathrm{CP} \rightarrow & \mathrm{XP} & \mathrm{C}'\\
& & (\uparrow \mathrm{DF}) = \downarrow & \uparrow = \downarrow\\
& & (\uparrow \mathrm{DF}) = (\uparrow \mathrm{COMP}^{*}\ \mathrm{GF}) &
\end{array}$$

These rules allow two f-structures for the example in question: one in which the NP *dem Kind* 'the child' is the topic and another in which this NP is the focus. Figure A.3 shows the analysis with a topicalized constituent in the prefield.


Figure A.3: Analysis of verb second
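The functional-uncertainty annotation in rule (29d) can be sketched as a path search through f-structures modelled as Python dictionaries (an illustrative encoding with a small, assumed set of grammatical functions):

```python
# (↑ DF) = (↑ COMP* GF): the fronted constituent's f-structure must be
# the value of a grammatical function reached after zero or more COMP
# attributes. The GF set below is a simplification for illustration.
GFS = ("SUBJ", "OBJ", "OBJ_THETA")

def df_resolvable(fstruct, fronted):
    """Is `fronted` the value of some COMP* GF path in `fstruct`?"""
    if any(fstruct.get(gf) is fronted for gf in GFS):
        return True
    comp = fstruct.get("COMP")
    return comp is not None and df_resolvable(comp, fronted)

dem_kind = {"PRED": "Kind"}
clause = {"PRED": "helfen<SUBJ, OBJ_THETA>", "OBJ_THETA": dem_kind}
print(df_resolvable(clause, dem_kind))   # True: the DF is linked to OBJ_THETA
```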

# **A.7 Categorial Grammar**

1. The analysis of *The children in the room laugh loudly.* is given in Figure A.4.


Figure A.4: Categorial Grammar analysis of *The children in the room laugh loudly.*

2. The analysis of *the picture of Mary* is given in Figure A.5. n/pp corresponds to N<sup>0</sup>, n corresponds to N̄, and np corresponds to NP.

$$
\dfrac{\dfrac{the}{np/n}\qquad
\dfrac{\dfrac{picture}{n/pp}\qquad
\dfrac{\dfrac{of}{pp/np}\qquad\dfrac{Mary}{np}}{pp}}{n}}{np}
$$

Figure A.5: Categorial Grammar analysis of *the picture of Mary*
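The application rules used in these derivations can be sketched as follows, with complex categories encoded as (result, slash, argument) tuples (an illustrative encoding):

```python
# Forward application: X/Y followed by Y gives X.
# Backward application: Y followed by X\Y gives X.
def forward(functor, arg):
    if isinstance(functor, tuple) and functor[1] == "/" and functor[2] == arg:
        return functor[0]

def backward(arg, functor):
    if isinstance(functor, tuple) and functor[1] == "\\" and functor[2] == arg:
        return functor[0]

# the picture of Mary, as in Figure A.5
the, picture = ("np", "/", "n"), ("n", "/", "pp")
of, mary = ("pp", "/", "np"), "np"
pp = forward(of, mary)       # of + Mary          -> pp
n = forward(picture, pp)     # picture + pp       -> n
print(forward(the, n))       # the + n            -> np
```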

# **A.8 Head-Driven Phrase Structure Grammar**

1. The solution is:

       head-argument-phrase
       phon          ⟨ Max lacht ⟩
       synsem|loc    cat   head  1
                           spr   ⟨⟩
                           comps 2 ⟨⟩
                     cont  ind   3
                           rels  ⟨ 4, 5 ⟩
       head-dtr      word
                     phon ⟨ lacht ⟩
                     synsem|loc  cat   head  1 verb
                                               initial −
                                               vform fin
                                       spr   ⟨⟩
                                       comps 2 ⊕ ⟨ 6 ⟩
                                 cont  ind   3 event
                                       rels  ⟨ 4 lachen
                                                 event 3
                                                 agens 7 ⟩
       non-head-dtrs ⟨ word
                       phon ⟨ Max ⟩
                       synsem 6 loc  cat   head  noun
                                                 cas nom
                                           spr   ⟨⟩
                                           comps ⟨⟩
                                     cont  ind   7 per 3
                                                   num sg
                                                   gen mas
                                           rels  ⟨ 5 named
                                                     name max
                                                     inst 7 ⟩ ⟩

2. An analysis of the difference in (30) has to capture the fact that the case of the adjective has to agree with that of the noun. In (30a), the genitive form of *interessant* 'interesting' is used, whereas (30b) contains a form that is incompatible with the genitive singular.

   (30) a. eines interessanten Romans
           one.gen interesting.gen novel.gen
           'of an interesting novel'
        b. * eines interessanter Romans
           one.gen interesting.nom novel.gen

(31) shows the cat value of *interessanten*.


(31) cat value of *interessanten* 'interesting' with case information:

```
head   adj
       mod  N[case 1]
       case 1 gen
spr    ⟨⟩
comps  ⟨⟩
```
The structure sharing of the case value of the adjective with the case value of the N under mod identifies the case values of the noun and the adjective. *interessanten* can therefore be combined with *Romans*, but not with *Roman*. Similarly, *interessanter* can only be combined with the nominative *Roman*, but not with the genitive *Romans*.
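The effect of the structure sharing can be sketched with one shared Python object standing in for the value tagged 1 (an illustrative encoding):

```python
# The adjective's own case value and the case value of the noun it
# modifies (under mod) are the very same object, so they cannot differ.
shared_case = ["gen"]                        # the value tagged 1, here gen
interessanten_cat = {
    "head": {"type": "adj",
             "case": shared_case,                            # case 1
             "mod": {"type": "noun", "case": shared_case}},  # N[case 1]
    "spr": [], "comps": [],
}

def can_modify(adj_cat, noun_case):
    """Can the adjective modify a noun with this case value?"""
    return adj_cat["head"]["mod"]["case"] == [noun_case]

print(can_modify(interessanten_cat, "gen"))  # True: Romans (genitive)
print(can_modify(interessanten_cat, "nom"))  # False: Roman (nominative)
```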


For a refinement of the analysis of agreement inside the noun phrase, see Müller (2007a: Section 13.2).

# **A.9 Construction Grammar**

Idioms can be found by reading the newspaper carefully. The less exciting method is to look them up in a dictionary of idioms such as the Free Dictionary of Idioms and Phrases<sup>1</sup>.

<sup>1</sup> http://idioms.thefreedictionary.com/, 2018-02-20.

# **A.10 Dependency Grammar**


# **A.11 Tree Adjoining Grammar**

The elementary trees in Figure A.6 are needed for the analysis of (32).

(32) der dem König treue Diener
     the.nom the.dat king loyal servant
     'the servant loyal to the king'

Figure A.6: Elementary trees for *der dem König treue Diener* 'the servant loyal to the king'

By substituting the tree for *dem* 'the' into the substitution node of *König* 'king', one arrives at a full NP. This can then be inserted into the substitution node of *treue* 'loyal'. Similarly, the tree for *der* 'the' can be combined with the one for *Diener* 'servant'. One then has both of the trees in Figure A.7.

Figure A.7: Trees for *der dem König treue* and *der Diener* 'the servant loyal to the king' after substitution

The adjective tree can then be adjoined to the N′-node of *der Diener*, which yields the structure in Figure A.8 on the next page.


Figure A.8: Result of adjunction of the AP to the N′-node
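The two TAG operations can be sketched on trees encoded as nested lists `[label, child, ...]`. The tree shapes below are simplified assumptions standing in for the elementary trees in Figure A.6; substitution nodes are marked with ↓, the foot node with *:

```python
def substitute(tree, initial):
    """Plug the initial tree into matching substitution nodes (X↓)."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    if label == initial[0] + "↓" and not children:
        return initial
    return [label] + [substitute(c, initial) for c in children]

def adjoin(tree, aux):
    """Adjoin aux (root X, foot X*) at nodes labelled X (here unique)."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    if label == aux[0]:
        return plug_foot(aux, [label] + children)  # old subtree under the foot
    return [label] + [adjoin(c, aux) for c in children]

def plug_foot(aux, subtree):
    """Replace the foot node X* of the auxiliary tree by the old subtree."""
    if isinstance(aux, str):
        return aux
    label, *children = aux
    if label == subtree[0] + "*" and not children:
        return subtree
    return [label] + [plug_foot(c, subtree) for c in children]

der_diener = ["NP", ["Det", "der"], ["N'", ["N", "Diener"]]]
dem_koenig = ["NP", ["Det", "dem"], ["N'", ["N", "König"]]]
treue_aux = ["N'", ["AP", ["NP↓"], ["A", "treue"]], ["N'*"]]

ap_full = substitute(treue_aux, dem_koenig)  # one of the trees in Figure A.7
print(adjoin(der_diener, ap_full))           # the structure in Figure A.8
```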



# **Name index**

Abbott, Barbara, 610 Abeillé, Anne, xvii–xix, 121, 162, 163, 178, 179, 312, 419, 420, 565, 566, 655, 685 Abney, Steven P., 29, 118–120, 128, 491, 507, 523, 543, 705 Abraham, Werner, 12, 153 Abzianidze, Lasha, 268 Ackerman, Farrell, 118, 540, 690, 705 Adams, Marianne, 469 Ades, Anthony E., 528 Adger, David, x, xvi, 127, 132–134, 137–142, 150, 166, 173, 180, 562, 593, 595 Ágel, Vilmos, 369, 417 Aguado-Orea, Javier, 499, 544 Ahmed, Mohamed Ben, 419 Ahmed, Reaz, 267 Ajdukiewicz, Kazimierz, 29, 160, 163, 247 Alencar, Leonel F. de, 223, 224, 246 Alqurashi, Abdulrahman, 572 Alsina, Alex, 37, 316, 458, 601, 606, 613 Altmann, Hans, 48 Ambridge, Ben, 467–469, 494, 499, 505, 550 Anderson, John M., 369 Anderson, Stephen R., 233 Andrews, Avery, 224 Aoun, Joseph, 111, 120 Arad Greshler, Tali, 267, 268 Arends, Jacques, 483 Arka, I Wayan, 224 Arnold, Doug, 571 Arnold, Jennifer E., 527 Asher, Nicholas, 223 Askedal, John Ole, 43 Asudeh, Ash, xv, 232, 244, 246, 309, 316, 601, 613–616, 619, 623, 681, 687 Attardi, Giuseppe, 370 Attia, Mohammed A., 223 Avgustinova, Tania, 269 Bach, Emmon, 62, 102

Backofen, Rolf, 267 Bahrani, Mohammad, 183 Baker, Carl Lee, 486, 492 Baker, Mark C., 454, 458, 462, 534 Baldridge, Jason, x, 163, 247, 248, 253<sup>5</sup> , 255, 259, 265, 301, 580 Balla, Amar, 267 Ballweg, Joachim, 552 Baltin, Mark, 462, 463 Bangalore, Srinivas, 417 Bannard, Colin, 317, 570 Bar-Hillel, Yehoshua, 571, 573 Bargmann, Sascha, 415, 680 Barry, Guy, 265 Bartsch, Renate, 104, 161 Barwise, Jon, 284 Baschung, K., 247 Bates, Elizabeth A., 479, 481, 509 Baumgärtner, Klaus, 369, 373, 403 Bausewein, Karin, 157, 158, 290 Bayer, Josef, 101, 117 Beavers, John, 162, 163, 247, 307, 343, 656, 660 Bech, Gunnar, 47, 263, 425, 438 Beck, Sigrid, 101 Becker, Tilman, 96, 425, 427, 436 Beermann, Dorothee, 268 Beghelli, Filippo, 147 Behrens, Heike, 367, 479, 548, 657 Bellugi, Ursula, 485 Bender, Emily M., x, 6, 27, 119, 268–270, 285, 310, 331, 332, 335, 338, 529, 564, 567, 572 Bentzen, Kristine, 147 Bergen, Benjamin K., 96, 315, 326, 327, 342, 344, 345, 546, 619, 641 Berman, Judith, 37, 75, 101, 102, 223, 235– 240, 243, 246, 435, 458, 460 Berwick, Robert C., 160, 172, 176, 493, 535, 536, 551

Bes, G. G., 247 Bever, Thomas G., 524 Bialystok, Ellen, 481 Bick, Eckhard, 371 Bickerton, Derek, 482 Bierwisch, Manfred, 6, 83, 102, 103, 117, 119, 473 Bildhauer, Felix, x, 17, 150, 154, 269, 271, 312, 327, 351, 399 Bird, Steven, 271, 338 Bishop, Dorothy V. M., 484–486 Bjerre, Tavs, 662 Blackburn, Patrick, 81, 513, 580 Błaszczak, Joanna, 147, 181 Blevins, James P., 323 Block, Hans-Ulrich, 223 Blom, Corrien, 650, 661 Bloom, Paul, 118, 535, 537 Blutner, Reinhard, 579 Boas, Hans C., 331, 357, 366, 602 Bobaljik, Jonathan, 148 Bod, Rens, 493, 499, 501, 664, 665 Bögel, Tina, 224 Bohnet, Bernd, 370 Bolc, Leonard, 267 Bond, Francis, 267, 268 Booij, Geert, 616, 661, 664 Borer, Hagit, 155, 608, 625, 631, 635, 640 Börschinger, Benjamin, 223 Borsley, Robert D., xvii–xix, 123, 125, 150, 161, 162, 172, 180, 273, 297, 312, 340, 367, 406, 552, 572, 677 Bos, Johan, 81, 580 Bosse, Solveig, 147 Bouchard, Lorne H., 183 Boukedi, Sirine, 267 Boullier, Pierre, 223 Bouma, Gerlof, 247 Bouma, Gosse, 168, 170, 172, 247, 268, 288, 299, 307, 326, 353, 385, 391, 513, 571, 572, 586, 596, 647, 687 Braine, Martin D. S., 470, 506 Brame, Michael, 29 Branco, António, 268 Brants, Sabine, 465 Bredt, Thomas H., 120 Bresnan, Joan, 101, 125, 157, 162, 172, 223, 226,

228, 233–236, 238, 246, 323, 352, 455, 458, 551, 563, 568, 572, 585, 610, 614, 620, 621, 628 Brew, Chris, 269, 363 Briscoe, Ted, 123, 183, 248, 255, 265, 363, 470, 587, 605, 625, 704 Bröker, Norbert, 377 Brosziewski, Ulf, 132 Brown, Roger, 507 Bruening, Benjamin, 29, 147, 174, 617 Bryant, John, 316 Budde, Monika, x, 671 Bühler, Karl, 472<sup>25</sup> Bungeroth, Jan, 269 Burzio, Luigi, 93, 113 Buscha, Joachim, 369 Busemann, Stephan, 205, 267 Bußmann, Hadumod, 13 Butt, Miriam, 223, 224, 660, 687 Cahill, Aoife, 224 Calder, Jo, 267 Calder, Jonathan, 247 Callmeier, Ulrich, 267 Candito, Marie-Hélène, 320, 419, 420, 422, 430, 642 Cappelle, Bert, 326, 349, 619, 660, 661, 663, 664 Capstick, Joanne, 267 Cargo, Martha B., 485 Carlson, Gregory N., 657 Carnie, Andrew, 100 Carpenter, Bob, 207, 221, 247, 267 Carroll, John, 183 Cavar, Damir, 523 Çetinoğlu, Özlem, 224 Chang, Nancy, 96, 315, 316, 326, 327, 342, 344, 345, 546, 619, 641 Chatterjee, Sudipta, 247, 248 Chaudhuri, B. B., 223 Chaves, Rui P., 178, 179, 391, 572 Chesi, Cristiano, 4 Choi, Hye-Won, 236, 237 Chomsky, Noam, 4, 6, 75, 83–87, 89, 92–94, 96, 97, 100, 110, 111, 118, 120, 123, 127, 129, 135, 136, 144–147, 154– 157, 159–162, 165, 170–172, 175–

178, 181–183, 272, 315, 323, 359, 435, 436, 447, 451, 453, 454, 456, 458, 460, 462, 470–473, 475, 477– 479, 482, 484, 486, 490, 491, 493, 506, 507, 521, 524, 527, 529, 530, 533, 539, 541–543, 551, 558, 572, 612, 620, 629–631, 703 Chouinard, Michelle M., 507 Christiansen, Morten H., 469 Chrupala, Grzegorz, 224 Chung, Chan, 685 Chung, Sandra, 469 Church, Kenneth, 705 Cinque, Guglielmo, 145–147, 150, 152, 175, 470, 471, 477, 700 Citko, Barbara, 157, 159 Clark, Alexander, 506 Clark, Eve V., 507 Clark, Herbert H., 526 Clark, Stephen, 247, 248 Clément, Lionel, 223, 320, 642 Clifton Jr., Charles, 523, 530 Coch, José, 371 Cohen, Shay B., 177 Cole, Jennifer, 120, 523 Comrie, Bernard, 158, 163, 290 Cook, Philippa, x, 17, 38, 154, 223, 351, 399, 468 Cooper, Robin, 284 Coopmans, Peter, 571 Copestake, Ann, 177, 267, 268, 276, 286, 319, 331, 360, 363, 365, 424, 580, 587, 605, 625 Corluy, A., 247 Correa, Nelson, 120 Costa, Francisco, 268 Covington, Michael A., 371, 402 Crabbé, Benoit, 419 Crain, Stephen, 460, 461, 469, 498, 505, 528 Cramer, Bart, 269 Crocker, Matthew, 120–123 Croft, William, 315, 316, 458, 471, 546, 601, 602, 607, 643, 645, 658, 681, 697 Crouch, Richard, 223 Crysmann, Berthold, 17, 104, 162, 163, 268, 271, 307, 343, 357, 656, 662 Csernyi, Gábor, 224

Culicover, Peter W., xix, 83, 161, 166, 460, 478, 525, 542, 573, 601, 647–649, 660 Culy, Christopher, 203, 419, 477, 551 Curran, James R., 247 Curtiss, Susan, 481 Da Sylva, Lyne, 183 Dąbrowska, Ewa, 315, 367, 484, 486, 497, 545, 601 Dahl, Östen, 403, 412, 470 Dahllöf, Mats, 267, 268 Dale, Robert, 223 Dalrymple, Mary, 37, 101, 224, 225, 229–232, 236, 244, 246, 309, 316, 377, 458, 461, 558, 572, 578, 613, 681, 687 Davidson, Donald, 625 Davis, Anthony R., 286, 320, 642 De Beule, Joachim, 316 De Kuthy, Kordula, 14, 17, 119, 171, 268, 327, 351, 468, 552, 553 de Saussure, Ferdinand, 3, 473 Declerck, Thierry, 267 Dellert, Johannes, 419 Delmonte, Rodolfo, 223, 224 Demberg, Vera, 359 Demske, Ulrike, 29 den Besten, Hans, 117, 161 Deppermann, Arnulf, 367, 487, 616 Derbyshire, Desmond C., 475 Devlin, Keith, 284 Dhonnchadha, E. Uí, 371 Diagne, Abdel Kader, 267 Diesing, Molly, 101 Dini, Luca, 267 Dione, Cheikh Mouhamadou Bamba, 224 Dipper, Stefanie, 223 Donati, Caterina, 157 Donohue, Cathryn, 307 Doran, Christine, 419 Doran, Robert W., 120 Dorna, Michael, 267 Dörre, Jochen, 223, 267 Dowty, David, 6, 92, 161, 248, 251, 349, 579, 585, 608, 614, 625, 630, 634, 654 Dras, Mark, 223 Drellishak, Scott, 269


Drosdowski, Günther, 25, 41 Dryer, Matthew S., 103, 454, 455, 697 Duden, 43 Dürscheid, Christa, 48, 117, 552 Dyvik, Helge, xi, 224 É. Kiss, Katalin, 148 Egg, Markus, 579, 581 Eichinger, Ludwig M., 417 Eikmeyer, Hans-Jürgen, 247 Eisele, Andreas, 223 Eisenberg, Peter, x, 13, 23, 25, 28, 33, 34, 37, 42, 48, 49, 64, 72, 161, 272, 490 Elbourne, Paul, 116, 176, 531, 703 Ellefson, Michelle R., 469 Elman, Jeffrey L., 481, 483, 484, 494, 498, 509 Embick, David, 625 Emerson, Guy, 270 Emirkanian, Louisette, 183 Engdahl, Elisabet, 351 Engel, Ulrich, 369, 372, 374, 375, 377, 378, 382, 408, 417, 553, 572 Epstein, Samuel David, 160, 172 Erbach, Gregor, 267 Ernst, Thomas, 152, 153 Eroms, Hans-Werner, 96, 369, 374, 377–379, 381–383, 389, 395, 396, 414, 417, 572 Erteschik-Shir, Nomi, 466–469 Estigarribia, Bruno, 499 Estival, Dominique, 267 Evang, Kilian, 419 Evans, Nicholas, 455, 458, 461, 470, 471, 477, 701 Evans, Roger, 183 Everett, Daniel L., 474, 476 Evers, Arnold, 117, 119, 639 Evert, Stefan, x Faaß, Gertrud, 224 Fabregas, Antonio, 159 Falk, Yehuda N., 122 Fan, Zhenzhen, 267, 268 Fang, Ji, 224 Fanselow, Gisbert, x, 83, 100, 110, 115–117, 119, 120, 125, 154, 170, 171, 247, 255,

301, 302, 401, 441, 458, 461, 468, 482, 523, 552, 597, 701

Feldhaus, Anke, 102 Feldman, Jerome, 488 Felix, Sascha W., 83, 100, 110, 125 Filimonova, Elena, 453 Fillmore, Charles J., 68, 79, 122, 172, 315–319, 327–330, 342, 497, 566, 573, 605, 607, 642 Fischer, Ingrid, 565 Fischer, Kerstin, 367 Fischer, Klaus, 369 Fisher, Simon E., 484–486 Fitch, W. Tecumseh, 86, 145, 447, 462, 471, 473–475, 477, 478, 484, 486, 506 Flickinger, Dan, 119, 177, 212, 267–269, 276, 286, 311, 319, 331, 335, 338, 349, 424, 564, 580, 588, 618 Fodor, Janet Dean, 522<sup>2</sup> , 534, 535, 539, 540, 542 Fodor, Jerry A., 524 Fokkens, Antske, 268, 269 Fong, Sandiway, x, 121, 176 Fordham, Andrew, 121–123 Forst, Martin, 37, 223, 458 Fortmann, Christian, 552 Fourquet, Jean, 102 Fouvry, Frederik, 269 Fox Tree, Jean E., 526 Fraj, Fériel Ben, 419 Frank, Anette, 102, 223, 580 Frank, Robert, 419 Franks, Steven, 496, 533 Frazier, Lyn, 522, 530 Freidin, Robert, 120, 183, 462, 701 Freudenthal, Daniel, 499, 543–545 Frey, Werner, 114, 117, 123, 153, 160, 167, 223, 229, 310, 552 Fried, Mirjam, 315, 317 Friederici, Angela D., 481, 484 Friedman, Joyce, 120 Fries, Norbert, 290 Fukui, Naoki, 101 Fukumochi, Yasutomo, 371 Futrell, Richard, 474 Gaifman, Haim, 373, 402, 403 Gallmann, Peter, xi, 125

Gardent, Claire, 223

Gardner, R. Allen, 483 Garrett, Merrill F., 524 Gärtner, Hans-Martin, x, 12, 147, 160, 167, 181, 552 Gazdar, Gerald, 120, 122, 123, 125, 163, 171, 178, 183, 184, 186, 190, 191, 195– 197, 201, 205, 311, 323, 354, 511, 513, 572 Geach, Peter Thomas, 171, 203, 685 Geißler, Stefan, 268 George, Marie St., 485 Gerdemann, Dale, 267 Gerdes, Kim, x, 371, 376, 419 Gergel, Remus, 101 Gerken, LouAnn, 537 Ghayoomi, Masood, 268, 694 Gibson, Edward, 301, 437, 454, 469, 474, 522, 523, 534–536, 639 Gillis, Steven, 543, 544 Ginsburg, Jason, 121, 176 Ginzburg, Jonathan, xvi, 122, 163, 169, 172, 276, 284, 309, 404, 413, 487, 497, 499, 542, 676 Giorgolo, Gianluca, xv, 619, 681 Gipper, Helmut, 41 Glauert, John, 269 Gobet, Fernand, 499, 543–545 Godard, Danièle, xvii, 685 Gold, Mark E., 487 Goldberg, Adele E., x, 315–317, 326, 347, 349, 366, 367, 467–469, 507, 510, 543, 546, 550, 570, 573, 601–605, 607, 608, 613, 616, 650, 657, 658, 660, 661, 681, 701 Gopnik, Myrna, 485 Gordon, Peter, 486, 491 Gosch, Angela, 485 Götz, Thilo, 267 Grebe, Paul, 41 Green, Georgia M., 548 Grewendorf, Günther, 83, 93, 94, 98, 100, 105, 108, 113, 117, 119, 125, 127, 146, 180, 290, 459, 552, 571, 648 Gries, Stephan Th., 664, 667 Grimshaw, Jane, x, 157, 469, 660 Grinberg, Dennis, 370 Grohmann, Kleantes K., 151, 180, 473, 474

Groos, Anneke, 158 Groß, Thomas M., x, 34, 158, 199, 378, 381, 383, 394, 400, 411, 413, 417, 573, 597, 598, 662 Grosu, Alexander, 469 Grover, Claire, 183 Grubačić, Emilija, 662 Guillotin, T., 247 Gunji, Takao, 597 Günther, Carsten, 351 Guo, Yuqing, 224 Guzmán Naranjo, Matías, 363 Haddar, Kais, 267 Haegeman, Liliane, 97, 98, 100, 125, 148, 557, 558, 561, 592 Haftka, Brigitta, 94, 153 Hagen, Kristin, 371 Hahn, Michael, 267 Haider, Hubert, 75, 83, 100, 101, 103, 110, 113, 117, 118, 123, 148, 151, 152, 171, 236, 245, 263, 289, 292, 294, 301, 309, 350, 430, 458, 459, 464, 533, 552, 567, 571, 597, 633, 699, 701, 703 Hajičová, Eva, 369 Hakuta, Kenji, 480, 481 Hale, Kenneth, 133, 475, 625 Hall, Barbara C., 120 Han, Chung-hye, 420 Hanlon, Camille, 507 Harley, Heidi, 629 Harlow, Ray, 29 Harman, Gilbert H., 81, 183 Harris, Zellig S., 4 Hasan, K. M. Azharul, 267 Haspelmath, Martin, x, xiii, 95<sup>8</sup> , 451, 471, 534, 697, 698 Haugereid, Petter, 268, 289, 309, 625, 633, 635, 637, 638, 705 Hauser, Marc D., 86, 145, 447, 462, 471, 473, 477, 478, 484, 486, 506 Hausser, Roland, 528 Hawkins, John A., 301, 455, 466, 469 Hays, David G., 370, 371, 373, 403 Heinecke, Johannes, 371 Heinz, Wolfgang, 109, 245, 633 Helbig, Gerhard, 369

Hellan, Lars, 29, 268, 705 Hellwig, Peter, 369, 371, 376, 377, 402, 410, 411, 417, 551, 703 Her, One-Soon, 223, 224 Heringer, Hans Jürgen, 96, 370, 373, 379, 382, 383, 403, 417 Herring, Joshua, 176 Herzig Sheinfux, Livnat, 267, 268 Higginbotham, James, 310, 568 Higinbotham, Dan, 223, 224 Hildebrandt, Bernd, 247 Hinkelman, Elizabeth A., 267 Hinrichs, Erhard W., 119, 171, 178, 268, 293, 311, 378, 438, 587, 596, 599, 639, 685 Hinterhölzl, Roland, 153 Hoberg, Ursula, 439, 595 Hockenmaier, Julia, 247, 248 Hockett, Charles F., 475 Hockey, Beth Ann, 419 Hoeksema, Jack, 650 Hoekstra, Teun, 272 Hoffman, Beryl Ann, 247, 255, 301 Hoffmann, Ludger, 247 Hofman, Ute, 48 Hofmeister, Philip, 469 Höhle, Tilman N., 43, 51, 101, 103, 114, 255, 271, 295, 517, 596, 774 Holler, Anke, 276 Holmberg, Anders, 534 Hornstein, Norbert, 128, 151, 175, 180, 470, 473, 474, 479 Hrafnbjargarson, Gunnar Hrafn, 147 Hróarsdóttir, Þorbjörg, 147 Huang, Wei, 371 Huddleston, Rodney, 263 Hudson Kam, Carla L., 483 Hudson, Carla L., 483 Hudson, Richard, x, 29, 369–373, 377–379, 383–386, 389, 394, 408, 409, 411, 412, 414, 416, 417, 479 Hukari, Thomas E., 571, 572 Humboldt, Wilhelm von, 472 Hunze, Rudolf, 223 Hurford, James R., 479, 498 Hurskainen, Arvi, 371

Ibbotson, Paul, 699 Ichiyama, Shunji, 371 Imrényi, András, 369 Ingram, David, 545 Iordanskaja, L., 371 Islam, Md. Asfaqul, 267 Islam, Muhammad Sadiqul, 267 Jackendoff, Ray, 75, 83, 94, 96, 119, 144, 161, 163, 166, 292, 316, 359, 367, 415, 455, 460, 471, 478, 498, 525, 529, 531, 532, 542, 573, 587, 601, 613, 647–650, 660, 670, 679, 680, 690 Jacobs, Joachim, 102, 196, 254, 367, 669, 690 Jacobson, Pauline, 196, 205, 297 Jaeggli, Osvaldo A., 592 Jäger, Gerhard, 579 Jäppinen, Harri, 371 Johannessen, Janne Bondi, 371 Johnson, David E., 144 Johnson, Jacqueline S., 480, 481 Johnson, Kent, 489, 490 Johnson, Mark, 122, 200, 202, 207, 221, 223, 224, 359, 443, 528–530, 551 Johnson, Mark H., 481, 509 Jones, Wendy, 485 Joshi, Aravind K., 96, 343, 417, 419, 420, 422, 424, 425, 427, 429–431, 436–438, 441–443, 634, 655 Jungen, Oliver, x Jurafsky, Daniel, 317, 570 Kahane, Sylvain, x, 369, 371, 374, 376, 383, 389, 393, 394, 404, 414, 422 Kallmeyer, Laura, x, 317, 320, 419, 420, 422, 424, 441–443, 572, 641–643, 681 Kamp, Hans, 229 Kanerva, Jonni M., 234, 235 Kaplan, Ronald M., 123, 125, 223, 224, 228, 233, 236, 237, 240, 242, 244, 309, 328, 458, 474, 513, 551, 572, 585, 687 Karimi-Doostan, Gholamhossein, 148 Karimi, Simin, 148 Karmiloff-Smith, Annette, 481, 485, 486, 509 Karttunen, Lauri, 238, 247 Kasper, Robert T., 104, 193, 299, 326, 403, 442, 560

Kasper, Walter, 177, 267, 268, 363 Kathol, Andreas, 48, 178, 199, 293, 299, 307, 343, 351, 358, 402, 406, 651, 662, 685 Kaufmann, Ingrid, 614 Kaufmann, Tobias, 267, 268, 705 Kay, Martin, 120, 122, 370, 705 Kay, Paul, x, 315–319, 327–331, 335, 342, 366, 497, 566, 605, 614, 618, 642, 644 Kayne, Richard S., 147, 148, 161, 162, 175, 301, 689, 699 Keenan, Edward L., 158, 163, 290 Keil, Martina, 565 Keller, Frank, 172, 178, 268, 357, 359, 514 Kempen, Masja, 543, 544 Kern, Franz, 369 Kern, Sabine, 523 Kettunen, Kimmo, 371 Keyser, Samuel Jay, 133, 625 Khlentzos, Drew, 460, 461, 469 Kibort, Anna, 619 Kiefer, Bernd, 267, 365, 442 Kifle, Nazareth Amlesom, 224 Kim, Jong-Bok, 4, 125, 268, 301, 472, 473, 487 Kim, Nari, 420 Kimball, John P., 486, 491 King, Paul, 207, 221, 270, 474, 513, 659 King, Tracy Holloway, 223, 224, 244, 309 Kinyon, Alexandra, 223, 320, 420, 430, 642 Kiparsky, Carol, 468 Kiparsky, Paul, 468 Kiss, Tibor, x, 102, 115, 116, 119, 255, 268, 290, 457, 464, 597, 685, 709 Klann-Delius, Gisela, x, 550 Klein, Ewan, 120, 163, 183, 184, 186, 190, 191, 195, 205, 247, 271, 323, 338, 511, 515, 572 Klein, Wolfgang, 161, 290, 370, 387, 479, 480, 490 Klenk, Ursula, 85, 86 Kliegl, Reinhold, 523 Kluender, Robert, 469, 479 Knecht, Laura, 323 Kobele, Gregory M., 172, 178 Koenig, Jean-Pierre, xvii–xix, 286, 312, 320, 322, 364, 642, 647, 673, 687, 696 Kohl, Dieter, 223

Kohl, Karen T., 536 Koit, Mare, 371 Kolb, Hans-Peter, 120–123, 144 Koller, Alexander, 419 Komarova, Natalia L., 4, 472, 534 Konieczny, Lars, 355, 359 König, Esther, 247, 572 Koopman, Hilda, 101 Kordoni, Valia, 268, 320, 642 Kornai, András, 77, 120, 456, 474 Kornfilt, Jaklin, 101, 117 Koster, Jan, 102, 104, 113, 117, 309, 463, 464 Kouylekov, Milen, 267 Kratzer, Angelika, 117, 552, 562, 625–628 Krieger, Hans-Ulrich, 267, 324, 365, 616, 643 Krifka, Manfred, 479 Kroch, Anthony S., 419, 429, 443, 572, 634 Kropp Dakubu, Mary Esther, 268 Kruijff, Geert-Jan M., 580 Kruijff-Korbayová, Ivana, 225 Kübler, Sandra, 370 Kubota, Yusuke, 262, 265 Kuhn, Jonas, x, 125, 180, 223, 233, 234, 245, 351, 531, 532, 542 Kuhns, Robert J., 120 Kunze, Jürgen, 370, 383, 394, 633 Kuperberg, Gina, 660 Kupść, Anna, 268, 312 Kutas, Marta, 469 Labelle, Marie, 524 Laczkó, Tibor, 224 Laenzlinger, Christoph, 143, 147, 149, 153, 301 Lai, Cecilia S. L., 485 Lai, Zona, 485 Lakoff, George, 315, 512 Lamping, John, 229, 377, 687 Langacker, Ronald W., 315, 345, 601, 602 Lappin, Shalom, 120, 144, 466–468 Lareau, François, 223 Larson, Richard K., 111, 133, 134, 160, 166, 571, 591 Lascarides, Alex, 587 Lasch, Alexander, 645 Laskri, Mohamed Tayeb, 267 Lasnik, Howard, 495, 530 Lavoie, Benoit, 371

Le, Hong Phuong, 420 Lee-Goldmann, Russell R., 79 Legate, Julie, 495–497 Lehtola, Aarno, 371 Leiss, Elisabeth, 4, 408, 521, 569 Lenerz, Jürgen, 12, 112, 117, 119, 142, 595 Lenneberg, Eric H., 480, 481 Levelt, Willem J. M., 246 Levin, Beth, 634 Levine, Robert D., x, 144, 312, 360, 363, 390, 391, 571, 572 Levinson, Stephen C., 455, 458, 461, 470, 471, 477, 701 Levy, Leon S., 419, 443 Lewin, Ian, 120–122 Lewis, Geoffrey L., 323, 642 Lewis, John D., 494, 498 Lewis, Richard L., 523 Li, Charles N., 675 Li, Wei, 268 Liakata, Maria, 224 Lichte, Timm, x, 317, 419, 427, 441, 681 Lichtenberger, Liz, 485 Lieb, Hans-Heinrich, x Lieven, Elena, 317, 550, 570 Lightfoot, David W., 120, 543 Lin, Dekang, 121, 123 Lin, Francis Y., 4 Link, Godehard, 457 Lipenkova, Janna, 268, 367, 676, 694 Liu, Gang, 268 Liu, Haitao, x, 371 Lloré, F. Xavier, 247 Lobin, Henning, 265, 370, 417 Löbner, Sebastian, 571 Lødrup, Helge, 37, 458 Lohndal, Terje, 625, 626, 634 Lohnstein, Horst, x, 97, 113, 121, 123, 125, 524, 571, 592 Longobardi, Giuseppe, 479 Lorenz, Konrad, 480 Lötscher, Andreas, 395, 567 Loukam, Mourad, 267 Lüdeling, Anke, x, 664 Luuk, Erkki, 473, 474, 478 Luuk, Hendrik, 473, 474, 478

Maas, Heinz Dieter, 371 Maché, Jakob, x, 671, 675 Machicao y Priemer, Antonio, 81, 268, 313, 694 Mack, Jennifer, 660 Mackie, Lisa, 224 MacWhinney, Brian, 493 Maess, Burkhard, 484 Maier, Wolfgang, 419 Maling, Joan, 119, 193, 235, 292, 632 Malouf, Robert, 170, 268, 288, 307, 326, 353, 385, 391, 571, 572, 586, 647, 687 Manandhar, Suresh, 267 Manshadi, Mehdi Hafezi, 183 Marantz, Alec, 155, 524, 527, 625, 627, 629, 630, 671 Marciniak, Małgorzata, 268 Marcus, Gary F., 484–486, 507 Marcus, Mitchell P., 120, 121 Marimon, Montserrat, 269 Marshall, Ian, 269 Marslen-Wilson, William D., 526 Martinet, André, 472<sup>25</sup> Martner, Theodore S., 120 Martorell, Jordi, 4 Masuichi, Hiroshi, 224 Masum, Mahmudul Hasan, 267 Matiasek, Johannes, 109, 245, 633 Matsumoto, Yuji, 370 Maxwell III, John T., 223 Mayo, Bruce, 223, 224 McCawley, James D., 469 McCloskey, James, 469 Mchombo, Sam A., 233, 614 McIntyre, Andrew, x McKean, Kathryn Ojemann, 523 Meinunger, André, 97, 117, 143, 147, 585 Meisel, Jürgen M., 455, 480, 482, 533, 550 Mel'čuk, Igor A., 369–371, 377, 411 Melnik, Nurit, 267, 268 Mensching, Guido, x, xi, 541 Menzel, Wolfgang, 371 Metcalf, Vanessa, 268 Meurer, Paul, 223, 224 Meurers, Detmar, 17, 102, 119, 171, 267, 268, 292, 293, 299, 310, 312, 349, 360, 363, 407, 438, 465, 564, 567, 587,

651, 685, 705 Meza, Ivan, 269 Micelli, Vanessa, 317, 324 Michaelis, Jens, x, 164, 167, 551, 552 Michaelis, Laura A., x, 245, 316, 320, 330, 331, 367, 573, 642, 703 Michelson, Karin, 364, 696 Miller, George A., 436, 521, 523 Miller, Philip, 685 Mineur, Anne-Marie, 267 Mistica, Meladel, 224 Mittendorf, Ingo, 224 Miyao, Yusuke, 269, 442 Moeljadi, David, 268 Moens, Marc, 704 Mohanan, KP, 37, 458 Mohanan, Tara, 37, 458 Mok, Eva, 316 Momma, Stefan, 223 Monachesi, Paola, 685 Montague, Richard, 190 Moortgat, Michael, 248 Moot, Richard, 247 Morgan, James L., 490 Morin, Yves Ch., 120 Morrill, Glyn, 247, 248, 261 Moshier, Andrew M., 322 Motazedi, Yasaman, 223 Muischnek, Kadri, 371 Mukai, Kuniaki, 284 Müller, Gereon, x, 94, 117, 119, 143, 171, 174, 419, 671–673, 680 Müller, Max, 478 Müller, Natascha, 480 Müller, Stefan, ix, xiii, xv–xix, 6, 15, 29, 34, 42, 52, 81, 102, 110, 113, 119, 150, 156–158, 160, 163, 164, 167, 168, 170–173, 177–182, 221, 233, 245, 246, 253, 263, 267–269, 275, 278, 285, 289, 290, 292–294, 297, 299, 300, 307, 309–313, 315–317, 322– 324, 328, 333, 338, 340, 341, 346– 348, 351, 357, 358, 360, 363–365, 367, 399–402, 406, 414, 415, 430, 437, 440, 451, 458, 461, 463, 465, 473, 477, 518, 519, 546, 552, 564, 567, 571, 585–588, 596, 602, 604,

607, 610, 614–617, 619, 622, 623, 625, 629, 633, 638, 639, 644, 645, 647–652, 658–664, 675, 676, 681, 683, 685, 687, 688, 692, 694, 699, 705, 709, 720 Muraki, Kazunori, 370, 371 Musso, Mariacristina, 484, 699 Müürisep, Kaili, 371 Muysken, Pieter, 76, 162 Mykowiecka, Agnieszka, 268 Nakayama, Mineharu, 498, 505 Nakazawa, Tsuneko, 119, 171, 178, 293, 311, 378, 438, 587, 596, 599, 639, 685 Nanni, Debbie L., 263 Nasr, Alexis, 394 Naumann, Sven, 183 Nederhof, Mark-Jan, 365 Neeleman, Ad, 316 Nelimarkka, Esa, 371 Nerbonne, John, 200, 202, 284, 308, 324, 458, 580, 616, 643 Netter, Klaus, 29, 81, 102, 103, 219, 253, 254, 267, 268, 308, 437, 442, 698, 708 Neu, Julia, 268 Neumann, Günter, 267 Neville, Anne, 267 Nevins, Andrew Ira, 474 Newmeyer, Frederick J., 455, 456, 462, 471, 479, 534, 542, 543, 571, 698 Newport, Elissa L., 480, 481, 483 Ng, Say Kiat, 268 Nguyen, Thi Minh Huyen, 420 Niño, María-Eugenia, 224 Nivre, Joakim, 370 Niyogi, Partha, 4, 472, 534–536 Noh, Bokyung, 587, 614, 639 Nøklestad, Anders, 371 Nolda, Andreas, x Nomura, Hirosato, 371 Noonan, Michael, 323, 642 Nordgård, Torbjørn, xi, 121 Nordhoff, Sebastian, x, xv, 617 Nordlinger, Rachel, 224, 310 Nowak, Martin A., 4, 472, 534 Noyer, Rolf, 629 Nozohoor-Farshi, R., 121

Nunberg, Geoffrey, 542, 605, 628, 629, 644, 662 Nunes, Jairo, 151, 171, 180, 473, 474 Nygaard, Lars, 371 Nykiel, Joanna, 487 O'Connor, Mary Catherine, 315 O'Donovan, Ruth, 224 O'Neill, Michael, 4, 473 Ochs, Elinor, 498, 507 Odom, Penelope, 523 Oepen, Stephan, 119, 267, 269 Oflazer, Kemal, 224 Ohkuma, Tomoko, 224 Oliva, Karel, 102, 297, 402, 410, 417 Oppenrieder, Wilhelm, 117, 596 Orgun, Cemil Orhan, 271 Ørsnes, Bjarne, x, 104, 106, 164, 171, 223, 267, 269, 694 Osborne, Miles, 248 Osborne, Timothy, x, xvii, 52, 158, 199, 378, 381, 383, 393, 394, 400, 411, 413, 414, 417, 573, 597, 598, 662 Osenova, Petya, 267 Osswald, Rainer, 320, 641–643 Ott, Dennis, 158 Özkaragöz, İnci, 323, 642 Packard, Woodley, 267 Paczynski, Martin, 660 Pafel, Jürgen, 48, 552 Paggio, Patrizia, 267, 351 Palmer, Alexis, 247, 248 Palmer, Martha, 420, 683 Pankau, Andreas, x, xi Pankau, Rainer, 485 Parisi, Domenico, 481, 509 Parmentier, Yannick, 419 Partee, Barbara H., 588 Patejuk, Agnieszka, 224 Paul, Hermann, 46 Paul, Soma, 267 Penn, Gerald, 267, 360 Pentheroudakis, Joseph, 223, 224 Perchonock, Ellen, 523 Pereira, Fernando, 342 Perles, Micha A., 571, 573

Perlmutter, David M., x, 93, 119, 140, 289, 457, 648 Perry, John, 284 Pesetsky, David, 474, 629, 631, 699 Peters, Stanley, 86 Petrick, Stanley Roy, 120 Pfister, Beat, 268, 705 Phillips, Colin, 359, 523, 529 Phillips, John D., 183 Piantadosi, Steven T., 474 Piattelli-Palmarini, Massimo, 494 Pickering, Martin, 265 Pienemann, Manfred, 246 Pietroski, Paul, 176, 493 Pietsch, Christian, x Pihko, Elina, 663 Piñango, Maria Mercedes, 660 Pine, Julian M., 499, 505, 543–545 Pineda, Luis, 269 Pinker, Steven, 86, 144, 246, 453–455, 458, 470, 471, 473, 478, 484, 670 Pittner, Karin, 290 Plainfossé, Agnes, 223 Plank, Frans, 453<sup>4</sup> , 453 Plath, Warren J., 120, 122 Plunkett, Kim, 481, 509 Poletto, Cecilia, 147, 153 Pollack, Bary W., 120 Pollard, Carl, 29, 34, 156, 169–171, 205, 207, 212, 219, 258–261, 267, 269, 272, 273, 276, 278, 284–286, 288, 293, 301, 304, 308, 309, 311, 319, 322, 323, 331, 332, 338, 340, 341, 358, 402, 414, 424, 474, 531, 540, 551, 558, 563, 564, 566, 567, 572, 578, 580, 586, 633, 659 Pollock, Jean-Yves, 100, 146, 148, 233 Popowich, Fred, 267 Porzel, Robert, 342 Postal, Paul M., 120, 474, 571 Poulson, Laurie, 269 Preuss, Susanne, 183, 193, 195 Prince, Alan, x Przepiórkowski, Adam, 119, 221, 224, 246, 268, 292, 310, 312, 564, 567 Pullum, Geoffrey K., x, xiii, 4, 77, 86, 93, 120, 158, 163, 183, 184, 186, 187, 190, 191,

195, 203, 205, 290, 323, 419, 456, 473–475, 477, 486, 488–490, 492, 494, 497, 500, 508, 510–515, 519, 533, 572 Pulman, Stephen G., 528 Pulvermüller, Friedemann, 4, 498, 510, 660, 661, 663, 664 Puolakainen, Tiina, 371 Putnam, Michael, 158, 417 Quaglia, Stefano, 224 Radford, Andrew, 150, 180, 543 Rahman, M. Sohel, 267 Rahman, Md. Mizanur, 267 Rákosi, György, 224 Rambow, Owen, 96, 223, 371, 394, 417, 419, 420, 425, 427, 430, 431, 435, 436, 441–443, 572, 683 Ramchand, Gillian, 588 Randriamasimanana, Charles, 224 Raposo, Eduardo, 496, 533 Rappaport Hovav, Malka, 630, 634 Rauh, Gisa, 156 Rawlins, Kyle, 477 Reape, Mike, 96, 178, 179, 223, 300, 307, 327, 343, 351, 358, 365, 552, 662 Redington, Martin, 506 Reis, Marga, 35, 36, 43, 49, 51, 102, 104, 117, 459 Remberger, Eva-Maria, 496, 541 Resnik, Philip, 420 Reyle, Uwe, 223, 229, 580 Rhomieux, Russell, 79 Richards, Marc, 129, 144, 145, 524, 527 Richter, Frank, x, 181, 207, 221, 267, 284, 307, 333, 335, 337, 341, 360, 363, 394, 474, 564, 568, 701 Rieder, Sibylle, 267, 268 Riemer, Beate, 480 Riezler, Stefan, 223, 224 Ritchie, Robert W., 86 Rizzi, Luigi, 145, 146, 150, 153, 175, 460, 462, 470, 471, 477, 533, 700 Roberts, Ian, 479, 534 Robins, Robert Henry, x Robinson, Jane, 342 Rodrigues, Cilene, 474

Rogers, James, 123, 474, 513 Rohrer, Christian, 223, 224 Romero, Maribel, 443 Roosmaa, Tiit, 371 Rosen, Carol G., x Rosén, Victoria, 224 Ross, John Robert, 119, 178, 201, 260, 354, 457, 462, 466 Roth, Sebastian, 224 Rothkegel, Annely, 370 Roussanaly, Azim, 420 Rowland, Caroline F., 499, 505 Ruppenhofer, Josef, 245, 316, 320, 330, 573, 642, 703 Sabel, Joachim, 552 Sadler, Louisa, 224 Sáfár, Éva, 269 Safir, Kenneth J., 571 Sag, Ivan A., x, xv–xvii, 27, 29, 120, 122, 153, 156, 162, 163, 169–172, 177, 178, 183, 184, 186, 190, 191, 195, 205, 207, 212, 219, 260, 262, 267, 268, 271–273, 276, 278, 284–286, 288, 301, 304, 307–309, 316, 319, 322, 323, 326, 331, 332, 338–343, 351, 353, 359, 362, 366, 367, 385, 391, 404, 413–415, 424, 469, 487, 497, 499, 511, 523, 526, 528, 529, 531, 532, 540, 542, 558, 563, 564, 566, 567, 571, 572, 580, 586, 614, 618, 628, 629, 633, 644, 647, 651, 656, 659, 662, 664, 676–678, 685, 687 Sagot, Benoît, 223 Sailer, Manfred, 284, 307, 333, 335, 341, 564, 565, 568 Saito, Mamoru, 530 Samarin, William J., 483 Sameti, Hossein, 183 Sampson, Geoffrey, 479, 480, 494 Samvelian, Pollet, xvii, 457, 685 Saraswat, Vijay, 229, 377, 687 Sarkar, Anoop, 343, 419, 655 Sato, Yo, 267, 307 Sauerland, Uli, x, 116, 176, 531, 703 Savin, Harris B., 523 Schabes, Yves, 420, 443, 565, 566

Schäfer, Roland, x, xii, 252<sup>4</sup> Scheffler, Tatjana, 420, 430 Schein, Barry, 625, 626 Schenkel, Wolfgang, 369 Scherpenisse, Wim, 108, 117, 199, 382 Schieffelin, Bambi B., 498, 507 Schlesewsky, Matthias, 523 Schluter, Natalie, 224 Schmidt, Paul, 267, 268 Scholz, Barbara C., 4, 86, 473–475, 486, 489, 490, 492, 494, 497, 500, 508, 510, 512–515, 519, 533 Schröder, Ingo, 371 Schubert, Klaus, 370 Schumacher, Helmut, 369 Schütz, Jörg, 267 Schwarze, Christoph, x, 223, 246 Segond, Frédérique, 224 Seiffert, Roland, 267 Seiss, Melanie, 224 Sells, Peter, 4, 125, 268, 472, 473 Sengupta, Probal, 223 Seuren, Pieter A. M., 144, 483 Sgall, Petr, 369 Shamir, Eliahu, 571, 573 Shieber, Stuart M., 203, 207, 221, 342, 359, 419, 443, 477, 528, 529, 551 Shtyrov, Yury, 660, 661, 663, 664 Siegel, Melanie, 177, 268 Simov, Alexander, 267 Simov, Kiril, 267 Simpson, Jane, 224, 316, 567, 614 Singleton, Jenny L., 483 Slayden, Glenn C., 267 Sleator, Daniel D. K., 370, 371 Smith, Carlota S., 630, 631 Smolensky, Paul, x Snedeker, Jesse, 660 Snider, Neal, 466, 469 Snyder, William, 533 Soehn, Jan-Philipp, 341 Son, Minjeong, 534, 541 Song, Sanghoun, 267, 268 Sorace, Antonella, 514 Spackman, Stephen P., 267 Speas, Margaret, 101 Spencer, Andrew, 571

Speyer, Augustin, 147 Sportiche, Dominique, 101, 111, 120 Srinivas, Bangalore, 419 Stabler, Edward, 120, 121, 145, 164, 165, 167, 175–179, 311, 460, 528, 530, 531, 551, 568, 703 Städing, Gabriele, 485 Stanojevic, Milos, 177 Stark, Elisabeth, xi Starosta, Stanley, 371, 572 Stearns, Laura, 474 Stede, Manfred, 247 Steedman, Mark, 160, 161, 163, 168, 177, 225, 247–249, 253, 255–260, 265, 301, 325, 515, 528, 531, 653, 654, 666, 705 Steels, Luc, 316, 317, 346, 349, 350, 365 Stefanowitsch, Anatol, 367, 508, 664, 667 Steinbach, Markus, 12, 117, 552 Sternefeld, Wolfgang, x, 83, 98, 101, 117, 125, 145, 147, 150, 151, 181, 236, 394, 458, 527, 571, 701 Stiebels, Barbara, 661 Stillings, Justine T., 263 Stowell, Timothy, 147, 467 Strecker, Bruno, 247 Strunk, Jan, x, 466 Suckow, Katja, 523 Sulger, Sebastian, 224 Svenonius, Peter, 148, 534, 541 Tabbert, Eric, 247 Takahashi, Masako, 419, 443 Takami, Ken-ichi, 468 Tanenhaus, Michael K., 526, 527, 657 Temperley, Davy, 370, 371 ten Hacken, Pius, 123, 516 Tesnière, Lucien, 31, 369, 376, 386, 388, 393, 414, 417 Theofilidis, Axel, 267 Thiersch, Craig L., 102, 117, 120–123 Thomas, James, 522 Thompson, Henry S., 125, 183, 192 Thompson, Sandra A., 675 Thompson, William, 545 Thornton, Rosalind, 460, 461, 469 Thráinsson, Höskuldur, 193, 235, 632

Timberlake, Alan, 323, 642 Toivonen, Ida, xv, 244, 309, 316, 613–615, 619, 620, 622, 623, 681, 687 Tomasello, Michael, 86, 118, 317, 367, 453, 471, 477, 482, 485, 490, 507, 510, 543, 545, 546, 550, 570, 601–604, 607, 699 Torisawa, Kentaro, 442 Torr, John, 177–179 Tóth, Ágoston, 224 Travis, Lisa, 455 Trinh, Tue H., 524 Trosterud, Trond, 371 Tseng, Jesse, 268, 289 Tsujii, Jun'ichi, 269, 442 Turpin, Myfany, 223 Tyson, Mabry, 342 Uibo, Heli, 371 Ulinski, Morgan, 223 Umemoto, Hiroshi, 224 Uriagereka, Juan, 495, 496, 533 Uszkoreit, Hans, 96, 184, 192, 196, 203, 205, 247, 255, 267, 342, 424, 436, 465, 522, 572 Vaidya, Ashwini, 683, 684 Valian, Virginia, 537–539 Valkonen, K., 371 Vallduví, Enric, 351 Van Eynde, Frank, 81, 341 van Genabith, Josef, 224, 371 Van Langendonck, Willy, 29, 372 van Noord, Gertjan, 168, 247, 268, 299, 326, 513, 596 van Riemsdijk, Henk, 95, 158, 471 van Trijp, Remi, x, 346–353, 355, 357–360, 366, 619 Van Valin Jr., Robert D., x, 468 Vancoppenolle, Jean, 247 Vargha-Khadem, Faraneh, 485 Vasishth, Shravan, x, 523 Vater, Heinz, 83 Veenstra, Mettina Jolanda Arnoldina, 120, 121, 123, 166, 176, 207, 460, 513, 572, 703 Velupillai, Viveka, 470 Vennemann, Theo, 29, 104, 161

Verhagen, Arie, 569 Verspoor, Cornelia Maria, 587, 614 Vierhuff, Tilman, 247 Vijay-Shanker, K., 425, 431, 442 Villavicencio, Aline, 247, 248, 255, 265, 471, 497, 704 Vogel, Carl, 267 Vogel, Ralf, 117, 158, 552 Volk, Martin, 183 von Stechow, Arnim, 83, 98, 117, 125, 147, 247, 571, 579, 585 Voutilainen, Atro, 371 Wada, Hajime, 223 Wahlster, Wolfgang, 177, 269 Walker, Donald E., 120 Walther, Markus, 271, 338 Wasow, Thomas, x, 27, 212, 267, 269, 285, 311, 331, 332, 359, 523, 526, 528, 529, 532, 542, 572, 628, 629, 644, 662 Webelhuth, Gert, 113, 117, 118, 147, 540, 678, 690, 705 Weber, Heinz J., 370, 387, 392 Wechsler, Stephen, x, 156, 160, 167, 246, 286, 316, 324, 347, 357, 367, 546, 587, 602, 612–615, 622, 623, 628–631, 639, 681 Wedekind, Jürgen, 223, 687 Wegener, Heide, 439 Weir, David, 425 Weir, Morton W., 483 Weissgerber, Monika, 370 Weisweber, Wilhelm, 183, 193, 195 Welke, Klaus, xvii, 324, 325, 370, 372, 691 Wells, Rulon S., 343, 662 Werner, Edeltraud, 370 Wesche, Birgit, 102, 255 Wetta, Andrew C., 199, 307, 651, 662 Wexler, Kenneth, 161, 454, 534–536, 543 Weydt, Harald, 472 Wharton, R. M., 489 White, Mike, 247 Wiese, Heike, 660 Wijnen, Frank, 543, 544 Wiklund, Anna-Lena, 147 Wilcock, Graham, 351 Wilder, Chris, 117


Wiley, Edward, 481 Williams, Edwin, 201, 245, 458 Wing, Ben, 247, 248 Winkler, Susanne, 117, 310, 350, 568 Wintner, Shuly, 267, 268 Wittenberg, Eva, x, 660 Wöllstein, Angelika, 43, 48 Wood, Randall, 4, 473 Wunderlich, Dieter, 86, 316, 470, 571, 573, 614 Wurmbrand, Susanne, 174, 596 Xia, Fei, 419 Yamada, Hiroyasu, 370

Yamada, Jeni, 485 Yampol, Todd, 247 Yang, Charles D., 495–497, 536, 541 Yang, Chunlei, 268 Yang, Jaehyung, 268 Yankama, Beracah, 176, 493 Yasukawa, Hidekl, 223 Yatsushiro, Kazuko, 116 Yip, Moira, 119, 292 Yoon, Juntae, 420 Yoon, SinWon, 419, 420, 430 Yoshinaga, Naoki, 442 Zaenen, Annie, 37, 193, 223, 233, 235, 236, 240, 242, 328, 572, 632 Zalila, Ines, 267 Zappa, Frank, 452<sup>3</sup> Zeevat, Henk, 247 Zhang, Yi, 269 Ziehe, T. W., 370, 371 Ziem, Alexander, 645 Zifonun, Gisela, 247 Živanović, Sašo, xii, xvi Zribi, Chiraz, 419 Zucchi, Alessandro, 612 Zwart, C. Jan-Wouter, 148 Zweigenbaum, Pierre, 223

Zwicky, Arnold M., 120, 122

# **Language index**

Abaza, 461 Akan, 466 Akkadian, 475 Arabic, 103, 223, 248, 267, 419, 572 Arrernte, 223 Bambara, 477 Basque, 372, 534, 579, 585 Bengali, 223, 267 Bulgarian, 267 Cantonese, 267 Catalan, 372 Chamorro, 307 Czech, 534 Danish, 102, 104, 223, 267, 269, 370, 371, 466, 469, 496 Dutch, 11322, 113, 148, 174, 247, 253, 268, 370, 46313, 543–545, 596, 632, 639, 650 Dyirbal, 455, 475 English, 3, 27, 6812, 74, 85, 87, 96–98, 102, 111, 113, 118, 152, 177, 191, 195, 223, 225–229, 232–233, 247, 255, 268, 29717, 301, 31126, 317, 335, 370, 371, 40632, 409, 414, 419, 430, 454, 461, 463, 469, 481, 491–500, 506–508, 535, 537, 540, 543–545, 564, 567, 570, 571, 585, 594, 625, 653, 670, 673, 676–679, 691, 694 Esperanto, 268, 370, 371, 386 Estonian, 371 Ewe, 307 Faroese, 371 Finnish, 23810, 247, 371, 372 French, xvii11, 27, 102, 223, 246, 247, 268, 269, 307, 370, 371, 392, 419, 507, 694

Galician, 496, 533<sup>1</sup> Georgian, 223, 268 German, 177, 269, 370–372, 506, 545, 694 Germanic, 100 Greek, 268 Guugu Yimidhirr, 461 Hausa, 268 Hawaiian Creole English, 483 Hebrew, 268, 534 Hindi, 224, 683 Hixkaryána, 475 Hungarian, 151, 224, 455 Icelandic, 37, 307, 596 Old, 371 Indonesian, 224, 268 Irish, 224, 307, 371, 642 Italian, 147, 153, 224, 370, 372, 419, 484, 496, 533<sup>1</sup> , 537 Jakaltek, 455 Japanese, 87, 11626, 116, 118, 174, 177, 224, 255, 268, 301, 370, 371, 454, 466, 484, 534 Javanese, 534 Jiwarli, 455 Kikuyu, 307 Korean, 224, 268, 271, 301, 420, 466, 534 Latin, 3, 255, 371 Lithuanian, 642 Malagasy, 224 Malayalam, 455, 534 Maltese, 268, 269, 694 Mandarin Chinese, 224, 268, 269, 371, 470, 481, 506, 537, 675–676, 694 Moore, 307

Murrinh-Patha, 224 Norwegian, 224, 268, 370, 371, 466, 497 Oneida, 696 Palauan, 307 Persian, 268, 269, 454, 685, 694 Pirahã, 474 Polish, 224, 268 Portuguese, 224, 268, 370, 371 Proto-Uralic, 475 Romance, 685 Russian, 23810, 269, 370, 371 Sahaptin, 269 Sami, 372 sign language, 482–483 American (ASL), 481, 483, 534 British, 269 French, 269 German, 269 Greek, 269 South African, 269 Slavic, 369 Sorbian Lower, 496, 533<sup>1</sup> Upper, 496, 533<sup>1</sup> Sotho, Northern, 224 Spanish, 154, 224, 269, 307, 327, 370, 371, 481, 544–545, 694 Straits Salish, 470 Swahili, 371 Swedish, 370, 372, 466, 61420, 623 Swiss German, 477 Tagalog, 247, 255, 455 Tamil, 466 Thompson Salish, 307 Tigrinya, 224 Turkish, 224, 247, 255, 269, 323–324, 642 Urdu, 224, 683 Vietnamese, 420 Wambaya, 269, 310, 455, 566

Warlpiri, 455, 475 Welsh, 103, 224, 269, 40632, 572 Wolof, 224 Yiddish, 269, 307, 420, 694

# **Subject index**

!, 340 +, 415, 680 \, 249, 255 ⃝, 404 ↓, 229, 261, 420 ∃, 90 ∀, 90 ⊕, 277 →, 53 ↑, 229, 257, 261 ↑↑, 259 ∨, 213 |, 239, 255 \*, 9<sup>2</sup> , 64, 75, 136, 192, 236, 242, 420 /, 197, 248–249, 255 #, 9<sup>2</sup> §, 9<sup>2</sup> ⇒, 277, 282 −◦, 230 a-structure, 235 accent, 271 accomplishment, 606 acquisition, 5, 480–481, 486–509, 531, 533– 549, 691 second language, 480 speed, 479–480 Across the Board Extraction, 171, 178, 201, 354, 662<sup>53</sup> actant, 31, 374 activity, 606 adjacency, 55 adjective, 18–20, 24, 93, 287, 470 depictive, 40 predicative, 40 adjunct, 30–34, 63–65, 75, 91, 102, 154, 191– 193, 232–233, 251, 278, 287–289, 319, 420–421, 467, 559 head, 76 adjunction, 421

ban, 566 obligatory, 430, 432–434 adposition, 22, 470 adverb, 18, 22, 93 pronominal-, 24 relative, 25 adverbial, 3415, 39–40 adverbs, 22 affix hopping, 100 agent, 30, 92, 235 Agree, 131, 152, 153, 176, 562, 595<sup>1</sup> agreement, 5, 35, 56–58, 151, 152, 176, 213, 377, 389 object, 147, 579, 585 ambiguity, 437, 439 spurious, 71, 77, 519, 624 analogy, 506 animacy, 92 antipassive, 671–674 apparent multiple fronting, 199<sup>14</sup> Arc Pair Grammar, 474 argument, 30–34, 91, 248, 285, 467 designated, 245, 289, 320<sup>4</sup> , 703 external, 92, 101, 245 internal, 92, 245 position, 92 argument attraction, 685 argument composition, 685 argument structure, 91 article, 18, 53 aspect, 606 attribute-value matrix (AVM), 207 attribute-value structure, 207 automaton finite, 507 auxiliary inversion, 97, 121, 172, 29717, 493– 507, 543 back-formation, 103

backward application, 168, 249

base generation, 116 benefactive, 619–625, 687 beneficiary, 92, 235, 236 -reduction, 61, 250 bias, 543 binary, 55 Binding Theory, 90, 144, 285, 29015, 460– 462, 558, 559, 571 Biolinguistics, 359 bit vector, 365 blocking, 508 bounding node, 462 branching, 55 binary, 55, 160, 47527, 501, 506, 557– 560, 636, 697 unary, 55 Broca's area, 483 Burzio's Generalization, 113, 671 c-command, 138, 30121, 460 c-structure, 193<sup>2</sup> , 225, 229 capacity generative, 164, 192, 203, 442, 551–555 cartography, 145<sup>8</sup> , 470 case, 20, 24, 33, 58, 245, 284, 310, 438 absolutive, 693 accusative, 20, 38, 40, 42, 109, 693 agreement, 42, 109<sup>19</sup> dative, 20, 110 ergative, 693 filter, 110 genitive, 20, 38, 109, 110, 293, 569 lexical, 109–110, 235 nominative, 20, 35, 109, 693 semantic, 40 structural, 109–110, 458 Case Theory, 144, 245 Categorial Grammar, 179 Categorial Grammar (CG), 164–173, 177, 196, 203, 205, 247–264, 271, 294, 297<sup>17</sup> , 308, 309, 311, 316, 319, 369, 378, 438, 443, 457, 458, 475, 497, 511, 513, 515, 523, 527, 529, 53114, 558, 572, 580, 585, 588<sup>5</sup> , 683, 705 category functional, 93, 145, 154, 175, 470 , 147

Adverb, 147, 152 Agr, 234 AgrA, 147 AgrIO, 147, 458 AgrN, 147 AgrO, 147, 458, 579, 585 AgrS, 147, 458, 579 AgrV, 147 Asp, 147, 148 Aux, 147 Benefactive, 147 C, 95, 470 Clitic Voices, 147 Color, 147<sup>11</sup> D, 95 Dist, 147, 148 Fin, 145 Foc, 145, 153 Force, 145 Gender, 147 Hearer, 147, 153 Honorific, 147 I, 95–101, 110, 470 Intra, 148 Kontr, 153 Mod, 147 Mood, 147 Nationality, 147<sup>11</sup> Neg, 147, 148, 234, 396 Number, 147 Obj, 148 OuterTop, 147 Pass, 593 Passive, 141 PassP, 591 PathP, 148 Perf, 593 Perfect(?), 147 Person, 147 PlaceP, 148 −Pol, 147 +Pol, 147 %Pol, 147 Predicate, 147 Quality, 147<sup>11</sup> Shape, 147<sup>11</sup> Share, 147

Size, 147<sup>11</sup> Speaker, 147, 153 Subj, 148 T, 95, 147, 234, 579 Tense, 135, 147 Top, 145, 148, 153 Tra, 148 *v*, 132–134, 591, 626–629 Voice, 147, 562, 579 Z, 147 lexical, 27, 93, 470 syntactic, 27 causative construction, 324 CAUSE, 579 change of state, 606 CHILDES, 493, 494, 49436, 496, 506 chimpanzee, 481 classifier, 471 clitic, 544 CoGETI, 269 Cognitive Grammar, 315, 452, 497, 545, 573 coherence, 237, 241, 244 coindexation, 284 Colocant, 414 comparative, 21, 24 competence, 358–359, 435–441, 473, 521– 532, 537, 548 complement, 34, 75, 92, 160–165, 278 complementizer, 105, 255, 295 completeness, 237, 241, 244 Complex NP Constraint, 462, 466 complex predicates, xvii<sup>11</sup> complexity class, 85, 419, 507, 551–555 composition, 256–257, 491 backward, 256, 654 forward, 256 compositionality, 286, 479 computer science, 120 configurationality, 117 conjunction, 22, 53, 155<sup>16</sup> constituent, 7, 13 discontinuous, 44, 96, 326–327, 343, 656 constituent order, 117, 188, 237, 239, 255–301, 327, 424–427 fixed, 255, 301 free, 255, 301

Constraint Language for Lambda-Structures (CLLS), 579 constraint-based grammar, 83<sup>1</sup> , 11626, 122, 125, 177, 180, 243, 474, 499, 511– 519, 523, 524<sup>4</sup> , 529, 52911, 530, 53013, 703–705 construction Active Ditransitive, 344 Caused-Motion, 349, 351, 608, 613, 623, 661 Determiner Noun, 342 linking, 319 N-P-N, 415–416, 419<sup>2</sup> , 679–681 passive, 320 resultative, 316, 570, 587–588, 615–639 subject, 320 transitive, 320 verb phrase, 318 Construction Grammar (CxG), 96, 117, 156, 172, 173, 17640, 196, 202, 203, 23810, 244, 245, 271, 283, 286, 28912, 309, 315–367, 419, 452, 497, 523<sup>3</sup> , 529, 53114, 545–548, 558, 566, 569, 573, 593, 602, 61623, 650, 681, 703, 705 Fluid, 324, 409 Sign-Based, 342, 358–366 Construction Grammar(CxG), 643 context, 498 context-free grammar, 203, 419, 487, 488, 551 probabilistic (PCFG), 317, 570 context-sensitive grammar, 419, 488, 489, 551 context-free grammar, 85 contraction, 90 contrast, 527 control, 35, 226 Control Theory, 91, 144 conversion, 27 cooperativeness, 437 coordination, 16–17, 4823, 65, 79, 122, 167– 168, 19914, 259, 277, 34324, 475, 52912, 547 test, 10–11, 16 copula, 45, 572 Copy Theory of Movement, 15515, 156, 172, 176<sup>43</sup>

core grammar, 93, 315, 533, 541
CoreGram, 269
corpus, 6
corpus annotation, 370
corpus linguistics, 508, 705
coverb, 455, 471
creole language, 482–483
critical period, 480–481
cycle
  in feature description, 215, 299, 345, 405
  transformational, 467, 525, 561
D-structure, 87, 128, 144, 309, 530
declarative clause, 105
deep structure, *see* D-structure
Deep Structure, 310
Definite Clause Grammar (DCG), 81, 221
deletion, 524
DELPH-IN, 267
dependency, 479
Dependency Categorial Grammar, 265
Dependency Grammar (DG), 96, 117, 173, 196, 199<sup>14</sup>, 443, 521<sup>1</sup>, 656<sup>49</sup>, 662<sup>53</sup>
Dependency Unification Grammar (DUG), 411, 551, 703
depictive predicate, 290<sup>15</sup>, 311<sup>26</sup>, 567–569
derivation, 27, 123, 324, 692
derivation tree, 422
Derivational Theory of Complexity (DTC), 523<sup>3</sup>, 523–526
descriptive adequacy, 451
determiner, 24, 53
  as head, 29
directive, 669
disjunction, 212–213, 239
Distributed Morphology, 671
*do*-Support, 545
dominance, 54
  immediate, 54, 187, 271, 326
economy
  transderivational, 144
elementary tree, 420
ellipsis, 290<sup>15</sup>, 487<sup>31</sup>, 524, 529<sup>12</sup>, 558, 571, 670
empty element, 15<sup>8</sup>, 68, 113, 155, 160, 166, 170, 299, 309, 316, 327, 426, 435, 442, 456, 459, 460, 558, 571–588, 665, 677
  PRO, *see* PRO
empty head, 670, 679
endocentricity, 95
entity, 190
epsilon, 573
epsilon production, 68
escape hatch, 464
event, 285
evidence
  negative indirect, 507
evokes operator, 345
Exceptional Case Marking (ECM), 562
experiencer, 30, 92, 235
explanatory adequacy, 451
Extended Projection Principle, 136
Extended Projection Principle (EPP), 458, 539
external argument, 93
extraction, 391, 462, 466–469, 572, 622
  from adjuncts, 552
  from specifier, 552
  island, 466
  subject, 533
extraction path marking, 307, 353, 385
extraposition, 104<sup>14</sup>, 148<sup>12</sup>, 172, 401, 437, 462–466
f-structure, 193<sup>2</sup>, 225–229, 558, 561, 566
Faculty of Language
  in the Broad Sense (FLB), 478
  in the Narrow Sense (FLN), 479
feature
  adj, 226, 232
  arg-st, 274, 558, 586
  comps, 272, 332
  comp, 226
  cont, 645
  daughters, 169
  df, 243
  dsl, 294, 297, 304
  evokes, 345
  focus, 226, 241
  gen, 284
  head-dtr, 275
  head, 280, 340–341
  initial, 278
  lex-dtr, 292
  mc, 188
  mod, 287
  mother, 331
  non-head-dtrs, 275
  num, 284
  obj, 226
  obl, 226
  per, 284
  pred, 225
  que, 304
  rel, 304
  sem, 645
  slash, 304
  specified, 338<sup>21</sup>
  spr, 272
  subj, 226
  synsem, 332
  topic, 226, 241
  val, 319
  xarg, 334, 338<sup>20</sup>, 564
  xcomp, 226
  checking, 130, 145
  deletion, 130
  strong, 127
  weak, 127
feature description, 207–219
feature structure, 217–219
feature-value structure, 207
feral child, 481
field
  middle-, 44, 101
  post-, 44
  pre-, 44, 101
filler, 198, 318
focus, 107, 145, 153–154, 225–226, 241, 468
Foot Feature Principle, 200
foot node, 432
formal language, 500
formalization, 6, 316, 546
forward application, 168, 248
FoxP2, 485–486
fronting, 10, 13–16, 154, 171, 399, 400, 651<sup>43</sup>, 662<sup>53</sup>
  apparent multiple, 179
function composition, 271, 311, 685
functional application, 190
functional uncertainty, 242, 328, 566, 639
functor, 248, 548
future, 20, 544
gap, 89
gender, 20, 24, 58, 284, 310, 516–518, 561
gene, 485–486, 700
Generalized Phrase Structure Grammar (GPSG), 75, 102<sup>12</sup>, 123, 125, 221, 237<sup>8</sup>, 239, 248, 262, 271, 278, 280, 294, 299, 308, 309, 327, 331, 344, 384, 403<sup>30</sup>, 419, 421, 425, 443, 474, 511, 513, 523, 529, 531<sup>14</sup>, 572, 703
Generative Grammar, 54, 83, 474, 511
Gesetz der wachsenden Glieder, 561
glue language, 230
glue semantics, 229–232, 377, 687
goal, 235
government, 33, 120
Government and Binding (GB), xi, 75, 84, 86, 87, 89, 90, 102, 108, 122, 194, 196<sup>5</sup>, 225, 227, 228, 230, 236, 237, 248, 251, 254, 255, 262, 271, 294, 309, 310, 316, 399, 421, 426, 438, 458, 460, 493, 523, 523<sup>3</sup>, 525, 558, 561, 564, 568, 571, 579, 653<sup>48</sup>, 671, 677, 678, 705
gradability, 513–514
Grammar Matrix, 269
grammatical function, 148, 233, 289, 318, 458–460, 704
  governable, 225
Greediness Constraint, 535, 536, 536<sup>2</sup>
head, 28–30, 229, 278
head domain
  extended, 236–237, 311
head feature, 30
Head Feature Convention (HFC), 184, 280
head movement, 178
Head-Driven Phrase Structure Grammar (HPSG), 75, 96, 101<sup>11</sup>, 116<sup>26</sup>, 123, 125, 145<sup>8</sup>, 154, 156, 162, 168, 173, 193<sup>2</sup>, 196, 199<sup>14</sup>, 203, 244, 245, 254<sup>6</sup>, 255, 267–311, 316, 319, 320, 332, 343, 344, 358–366, 378, 384, 386, 391, 399, 402, 414, 419, 430, 443, 458, 461, 474, 487<sup>31</sup>, 497, 513, 516, 523, 525, 529, 531<sup>14</sup>, 532, 540, 551, 558–560, 563, 564, 567, 572, 574, 580, 585, 588<sup>5</sup>, 625, 656, 662<sup>53</sup>, 683, 690, 694, 703, 705
  Constructional, 338
Heavy-NP-Shift, 327
Hole Semantics, 580
hydra clause, 457
hypotenuse, 345
iambus, 537
Icelandic, 292<sup>16</sup>
ID/LP grammar, 187, 237<sup>8</sup>, 271, 424
identification in the limit, 487–490
ideophone, 471
idiom, 342, 541, 564–569, 618<sup>25</sup>, 644
imperative, 20, 35
implication, 277, 282
index, 155
indicative, 20
infinitude, 4<sup>1</sup>, 471–479
inflection, 19–21, 88, 692
inflectional class, 21, 58
information structure, 107, 153–154, 225, 351, 469, 616<sup>22</sup>
inheritance, 244, 286, 318, 319, 324, 616, 643–645, 679, 704
  default, 123, 412, 644
  multiple, 212, 430
instrument, 235
Integrational Linguistics, x
interface, 359
interjection, 22, 23
interrogative clause, 48–49, 276, 676–679
intervention, 138
introspection, 705
inversion, 571
IQ, 485
Kleene star, 192, 236
label, 154–160, 169
λ-abstraction, 61
λ-calculus, 60

language
  formal, 120, 487
language acquisition, 118, 134, 248, 316, 451, 462<sup>11</sup>, 557
language evolution, 316
learnability, 489
learning theory, 500
left associativity, 249
lexeme, 26
Lexical Decomposition Grammar, 316
Lexical Functional Grammar (LFG), 37, 38, 75, 101<sup>11</sup>, 125, 154, 173, 177, 180, 193<sup>2</sup>, 196, 202, 221, 294, 308–311, 316, 327, 377, 386, 435, 458, 459, 474, 511, 513, 523, 523<sup>3</sup>, 529, 531<sup>14</sup>, 532, 551, 558, 559, 563, 564, 566, 572, 576, 578, 606, 616<sup>23</sup>, 650, 656, 690, 703, 705
lexical integrity, 233, 316, 614, 617, 664
Lexical Mapping Theory (LMT), 234–236, 294
Lexical Resource Semantics (LRS), 284<sup>9</sup>
lexical rule, 195, 253, 289, 588
  verb-initial position, 297–299
lexicon, 88, 283
linear logic, 230
linear precedence, 187, 271, 326
Linear Precedence Rule, 188
linearization rule, 170, 233, 344
Linguistic Knowledge Builder (LKB), 267
Link Grammar, 370
linking, 93, 234–236, 274, 285–286, 319–324, 376
list, 209, 715
  difference, 715
local maximum, 536
locality, 153, 276, 331–338, 342, 422, 442, 560–569
  of matching, 138
locative alternation, 658
Logical Form (LF), 88, 90–91, 309, 443
long-distance dependency, 105–109, 197–200, 239–243, 256–259, 297<sup>17</sup>, 302–308, 327–328, 331, 381–386, 430
lowering, 100
LP-rule, 188



macaque, 481
machine translation, 370
macro, 244, 309
Mainstream Generative Grammar, 83
Mandarin Chinese, 455, 490
Markov model, 507
matryoshka, 4, 7
maturation, 536
Meaning–Text Theory (MTT), 369, 370<sup>2</sup>, 377, 411<sup>36</sup>
meaning constructor, 230
memory, 524<sup>4</sup>
Merge, 145, 154, 457, 471<sup>23</sup>, 475<sup>27</sup>, 482
  External, 145, 697
  Internal, 145
metarule, 188–190, 195–196, 560
metrical grid, 271
metrics, 537
Middle Construction, 322, 324
middle field, *see* field
mildly context-sensitive grammar, 419, 442, 477, 551, 555
Minimal Recursion Semantics (MRS), 276, 284, 288<sup>11</sup>, 319, 331, 424, 580, 638
Minimalist Grammar (MG), 145<sup>8</sup>, 164–172, 176, 311, 551, 552
Minimalist Program (MP), 127–174, 194, 359, 396, 456, 457, 475, 513, 515, 527, 588<sup>5</sup>, 653<sup>48</sup>, 678, 703, 705
Missing VP effect, 522
MITRE, 122
model, 217–219
model-theoretic grammar, 83, 270, 474, 511–519
modifier, 34, 91
modularity, 527
module, 531
modus ponens, 231
mood, 20
morphology, 88, 176, 233, 690
Move, 145
movement
  altruistic, 154, 170
  covert, 151
  feature-driven, 153
  permutation, 12

movement test, 9–10
Move α, 89
Multi-Component TAG, 424, 425, 427
music, 478, 484
nativism, 452
negative evidence, 488, 492, 495, 507–509
negative polarity item, 132
Neo-Davidsonian semantics, 638
neural network, 498
neutral order, 114
New Prague School, 369
Nirvana, 538
No Tampering Condition (NTC), 359<sup>33</sup>, 529<sup>12</sup>
node, 54
  child, 55<sup>3</sup>
  daughter, 55
  mother, 55
  parent, 55<sup>3</sup>
  sister, 55
nominalization, 109, 508
Non-Tangling Condition, 95
nonlocal dependency, 325
noun, 18, 20, 24, 53, 68, 93, 470
  common, 342
  mass, 69
  relational, 66
NP-split, 14
nucleus, 372
number, 20, 56, 58, 284, 310
numeration, 128, 176
o-command, 301<sup>21</sup>
object, 310, 458–460
  direct, 38, 39
  indirect, 38, 39
obliqueness, 290<sup>15</sup>, 301<sup>21</sup>, 458
observational adequacy, 451
Off-Line Parsability, 551
Optimality Theory (OT), x, 523<sup>3</sup>
optional infinitive, 543
optionality, 32, 34
order
  unmarked, 114
organ, 484
paradigm
  inflection, 20, 26
parameter, 86, 462<sup>11</sup>, 482, 533–541, 691
  default value, 535, 536, 539
  head direction, 454–455
  head position, 86
  pro-drop, 496, 497, 533, 537–539
  subjacency, 469, 533
  subject article drop, 537
  SV, 534
  topic drop, 537
  V2, 534, 536
parser, 552
Parsing-as-Deduction, 122
partial verb phrase fronting, 202–203, 699
participle
  adjectival, 140
particle, 22, 23
passive, 109, 114, 139–142, 193–196, 233–236, 251–253, 289–324, 330, 377–378, 429–430, 524, 565, 585, 616<sup>22</sup>, 621, 622, 690
  impersonal, 110, 113, 195, 196, 293, 571
  long, 310<sup>25</sup>, 312
  remote, 378<sup>11</sup>, 596
path, 208
path equation, 342
patient, 30, 92, 235
PATR-II, 221, 342
performance, 317, 358–359, 435–441, 467, 469, 473, 521–532, 537, 548, 639, 699
periphery, 93<sup>5</sup>, 315, 541
permutation test, 9–10
person, 20, 56, 284
phase, 129, 359, 527
phenomenon, 217–219
Phonetic Form (PF), 88–90
phonology, 271
phrase, 7
phrase structure grammar, 53–59, 248, 271, 275, 511, 571
pidgin language, 482
pied-piping, 260, 414<sup>39</sup>
pivot schema, 545
plural, 20, 69
Polish, 564
polygraph, 393

positional, 471
postfield, *see* field
postposition, 22
Poverty of the Stimulus, 461, 462<sup>11</sup>, 482, 486–509
predicate, 91
predicate logic, 30
predicate-argument structure, 479
predicative, 40–42
prefield, *see* field
  ellipsis, 290<sup>15</sup>
preposition, 18, 22, 71–73, 93
present, 20
presupposition, 468
preterite, 20
PrinciParse, 123
principle
  Case, 292–294, 693
  Generalized Head Feature, 413
  Head Feature, 280, 340–341
  nonlocal feature, 305
  Semantics, 286, 693
  Sign, 332<sup>14</sup>
  Subject-Mapping, 235
Principles & Parameters, 86–87, 533–541
PRO, 426
Probability Matching, 483
progressive, 545
projection, 29, 95
  maximal, 29, 77, 96
  of features, 30
projectivity, 376, 382, 396, 404–410, 425, 597
pronominalization test, 8
pronoun, 24, 692
  expletive, 11–12, 26, 31, 107, 113, 459, 496, 538, 571
  reflexive, 284, 559
  relative, 572
prosody, 259, 515, 527
quantification, 232
quantifier
  existential, 90
  universal, 90, 191
question tag, 564
raising, 292<sup>16</sup>, 429, 563, 638–639
Random Step, 536

Rangprobe, 47
recursion, xi, 4<sup>1</sup>, 49–50, 65, 441, 471–479, 506, 521, 521<sup>1</sup>, 569–570
recursively enumerable language, 488
reference, 11, 12
regular language, 488, 551
relation, 560
  ⃝, 404
  *append*, 221, 277, 301
  shuffle, 404
Relational Grammar, x, 512
relative clause, 32, 47–49, 276, 383, 414, 469, 521, 676–679
  free, 157–158, 290<sup>15</sup>
repetitive, 579
representational model, 123
REQUEST, 122
restitutive, 579
resultative construction, 534, 587
rewrite grammar, 487
Right Roof Constraint, 462
right to left elaboration, 499
Right-Node-Raising, 162
rising, 383, 597
Role and Reference Grammar, x, 156
root, 691
root infinitive, 543
rule-to-rule hypothesis, 62, 191
S-structure, 87, 144, 309, 443, 530
satellite, 372
Satzglied, 13<sup>4</sup>
schema
  Filler-Head, 305
  head-adjunct, 288
  Head-Complement, 277
scope, 104–116, 199<sup>14</sup>, 233, 327, 424, 571, 662<sup>53</sup>
segmentation, 543
selection, 31
  restriction, 438
self-embedding, 472<sup>25</sup>, 521, 569, 570
semantic role, 30, 92, 234–236, 285, 319, 377, 635
semantics, 176, 286
sentence bracket, 44, 101
sentence symbol, 283

set, 209, 328–330
sexus, 517
Shortest Move Constraint (SMC), 165<sup>32</sup>, 552
sideward movement, 171
sign language, 483
signature, 217
Single Value Constraint, 535, 536
singular, 20
Situation Semantics, 284<sup>9</sup>
specifier, 75, 97, 160–165, 552
statistics, 224, 248, 317, 420, 498, 501–509, 665
stem, 692
strength, 127
Structure Preservation, 529<sup>12</sup>
structure sharing, 213–215, 272, 294, 342, 384, 525
subcategorization, 91, 184, 272
subcategorization frame, 91
subjacency, 144, 462–470, 533
subject, 34–37, 97, 110, 310, 318, 458–460, 564
Subject Condition, 458
subjunctive, 20
substitution, 420
substitution test, 7, 8
subsumption, 329
superlative, 21
surface structure, *see* S-structure
SVO, 535
symbol
  non-terminal, 487
  terminal, 487
syntax-semantics interface, 60
T model, 87–89
TAG Free Order (FO-TAG), 425
Tamagotchi, 480
tense, 20, 24
text, 475<sup>26</sup>
*that*-t, 533
*the*-clause, 678
thematic grid, 91
theme, 92, 235
theory, 217–219
Theta-Criterion, 92, 144, 228, 245
θ-grid, 91, 225


unary, 55
underspecification, 579
Underspecified Discourse Representation Theory (UDRT), 580
unification, 215–217, 250, 329
uniformity, 460, 571
universal, 453–479
  implicational, 453<sup>4</sup>
  implicative, 700
Universal Base Hypothesis, 703
Universal Grammar (UG), 145, 147, 248, 452, 595, 673
  as a toolkit, 471
  falsifiability, 469<sup>19</sup>, 471, 477
unrestricted grammars, 551
Unsupervised Data-Oriented Parsing (U-DOP), 501–507, 665–667
valence, 31–34, 57, 184, 225, 248–249, 271–272, 285, 420
  change, 678
  classes, 88
valence frame, 91
verb, 18–19, 24, 53, 93, 470, 691
  -final, 43
  -first, 43
  -second, 43
  AcI, 109
  auxiliary, 491
  bivalent, 43
  ditransitive, 42, 111<sup>21</sup>, 534
  ergative, 93<sup>4</sup>
  inherently reflexive, 16, 31
  intransitive, 42
  modal, 491, 545
  monovalent, 166–167
  particle, 524, 534
  perception, 109<sup>20</sup>
  serial, 675–676
  subjectless, 329
  transitive, 42, 289<sup>13</sup>
  unaccusative, 93, 117, 140, 236, 289<sup>13</sup>, 630, 691
  unergative, 289<sup>13</sup>, 691
verb position, 105, 196–197, 236, 253–255, 294–327, 429, 671, 673
  second, 105


verb-final language, 105, 237
verb-second language, 106
verb-particle, 5
verbal complex, 119, 639
Verb*mobil*, 177, 269, 363<sup>36</sup>
visual perception, 478
Vorfeld, 17<sup>9</sup>
Wernicke's area, 483
*wh*-exclamative, 678
Williams Syndrome, 485
word, 692
Word Grammar (WG), 370, 373, 378<sup>11</sup>, 379<sup>12</sup>, 384, 385, 411<sup>36</sup>, 416
word sequence, 7
X̄ theory, 73–77, 79, 84, 93–96, 118, 122, 123, 127, 131, 144, 155, 160–164, 170, 251<sup>3</sup>, 287, 455–457, 530, 557, 558, 561, 679

XP, 75

# Grammatical theory

This book introduces formal grammar theories that play a role in current linguistic theorizing (Phrase Structure Grammar, Transformational Grammar/Government & Binding, Generalized Phrase Structure Grammar, Lexical Functional Grammar, Categorial Grammar, Head-Driven Phrase Structure Grammar, Construction Grammar, Tree Adjoining Grammar). The key assumptions are explained and it is shown how the respective theory treats arguments and adjuncts, the active/passive alternation, local reorderings, verb placement, and fronting of constituents over long distances. The analyses are explained with German as the object language.

The second part of the book compares these approaches with respect to their predictions regarding language acquisition and psycholinguistic plausibility. The nativism hypothesis, which assumes that humans possess genetically determined innate language-specific knowledge, is critically examined, and alternative models of language acquisition are discussed. The second part then addresses controversial issues of current theory building, such as the question whether flat or binary branching structures are more appropriate, the question whether constructions should be treated on the phrasal or the lexical level, and the question whether abstract, non-visible entities should play a role in syntactic analyses. It is shown that the analyses suggested in the respective frameworks are often translatable into each other. The book closes with a chapter showing how properties common to all languages or to certain classes of languages can be captured.

"Stefan Müller's recent introductory textbook, "Grammatiktheorie", is an astonishingly comprehensive and insightful survey of the present state of syntactic theory for beginning students." Wolfgang Sternefeld und Frank Richter, *Zeitschrift für Sprachwissenschaft*, 2012

"This is the kind of work that has been sought after for a while. […] The impartial and objective discussion offered by the author is particularly refreshing." Werner Abraham, *Germanistik*, 2012

"These two volumes represent one of the first attempts since Sells' (1985) seminal work to provide theoretical linguists and those who work closely with them with an overview of the general representational machinery of contemporary frameworks and the key issues that separate those who prefer one over another. In general, the presentation of empirical data and theoretical concepts is highly accessible to scholar and student alike. The best use of these materials is for those seeking to gain a better understanding of the core concepts that motivate the general representations present in these frameworks. Although there are traits that are shared across many of these covered here, there are also fundamental differences that persist. These volumes at the very least enable those with different perspectives on key issues to engage in discussions and perhaps gain a better understanding and appreciation of each other's research moving forward. In closing, contra Sternefeld and Richter's (2012) somewhat pessimistic statements directed at an earlier version of this work and toward the state of generative grammar a priori, I view Müller's work here in a positive light as a conduit that has the potential to bring formal linguists together to gain a fuller appreciation of theoretical work beyond their own immediate communities." Michael T. Putnam, *Glossa*, 2017

"Here is a grand piece of work that should be read and taken seriously by grammarians of all stripes. It is a highly welcome antidote to the dominant trend of splendid isolation among the various schools of grammar theory in the past decades. This up-to-date, analytic, comparative and critical review of current major schools of grammar research is an offer one can't refuse." Hubert Haider, 2020