December 29, 2011
So I noticed that this XKCD comic excoriates computational linguistics (“fuck computational linguistics!”). The point of the cartoon seems to be that CL is so “ill-defined” that practitioners accept and use inconsistent theories (linguistic ones, I guess). The cartoon also seems to imply that they do this without criticism from their colleagues, or that they don’t take critical work to heart and prefer to play fast and loose.
I’m not sure that CL researchers are to blame for the ill-definition of CL. For that matter, I’m not sure that the ill-definition of the field itself is a bad thing. The ill-definition of the term reflects theoretical disputes within the field. At the same time, terms like “computational linguistics” can help to maintain funding for the research that’s needed to bring better definition to the bounds of the field in the first place. James Hendler argued that AI people cut off their noses in the ’80s by defining expert systems out of Artificial Intelligence. AI’s grand poobahs disowned their field’s only tangible success at the time, leading the U.S. government to conclude that only fruitless research was really AI. So the ill-definition of a young research field may be necessary in practice for that field to define itself better.

I doubt that this sort of ill-definition was the target of the XKCD toon’s wrath, though. I think the critique targets some hypothetical, individual CL researcher who holds theories A and B, where A entails the negation of some claim in B. Or, at least, this nerd does research that presupposes two mutually inconsistent linguistic theories, without reflecting on the conflict or heeding colleagues’ criticism. Naturally, I wonder which (inconsistent) linguistic theories are supposed to be so often combined.
It bears mentioning that I am no CL researcher (never mind linguist). I’m a doctoral student in philosophy with a back-burner programming project (NLP) that leads me into the CL literature. So far, my fun-time research on CL has led me to believe that semantics is where the action is in CL. That is, I’ve come to believe that developing the research program called computational semantics is ultimately the point of a certain strain of NLP research: the strain that takes Richard Montague’s brand of truth-conditional semantics for natural language as a computational model for building data structures that represent sentence meanings. I doubt that this holds for linguistics generally speaking, but it seems true of the kind of CL that developed out of NLP research. I know that a lot of people use statistical regularities to infer meaning, owing to the practical failures of parsing by strictly logical and rule-based heuristic means. My point is that syntactic work like parsing is either less sexy to CL people than semantics, or they realize that it’s so entangled with semantics that purely syntactic work cannot be an end in itself for CL.
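To make the Montague idea concrete, here is a toy sketch (my own illustration, not any real CL system): word meanings are functions, and a sentence’s meaning is computed by applying them to each other in step with the syntax. The hand-built lexicon and the sets of individuals below are hypothetical stand-ins.

```python
# Toy Montague-style compositional semantics.
# Proper names are type-raised to functions over predicates ((e->t)->t);
# intransitive verbs are functions from individuals to truth values (e->t).

runners = {"john", "mary"}   # hypothetical model: the individuals that run
sleepers = {"mary"}          # hypothetical model: the individuals that sleep

lexicon = {
    "John":   lambda pred: pred("john"),   # apply the predicate to the individual
    "Mary":   lambda pred: pred("mary"),
    "runs":   lambda x: x in runners,
    "sleeps": lambda x: x in sleepers,
}

def interpret(subject, verb):
    """Combine subject and verb denotations by function application,
    mirroring the [S [NP] [VP]] syntax tree."""
    return lexicon[subject](lexicon[verb])

print(interpret("John", "runs"))    # True
print(interpret("John", "sleeps"))  # False
```

The point of the sketch is just that the denotation of the whole is computed from the denotations of the parts, which is what makes the theory look like a recipe for building data structures.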
Theme: computational semantics is or is not ill-defined to the point where researchers rely on inconsistent theories. Discuss.
So, I think that we can distinguish a kind of computational semantics that does fit this bill from a kind that doesn’t. And the distinction I have in mind has nothing to do with the technical divisions in the field, like the logic/statistics division.
Computational semantics for the sake of NLP applications, like question-answering systems and automatic summarization, can do neat and productive work (I hope) without its designers taking a position on fundamental disputes in linguistic semantics or the philosophy of language and cognitive science. Jerry Fodor warns in
- LOT 2: The Language of Thought Revisited
against the monsters that lie waiting in “‘computational’ semantics,” because it belongs to a family of theories that confuse epistemology with semantics (concept pragmatism, instrumentalism, etc.). What Fodor calls epistemology here is taking dispositions to discover things as what defines conceptual content. E.g.: I have the concept GREEN only if I know how to find green things. This confuses the epistemological problem of evidence (the grounding of empirical knowledge in empirical instances of some category) with the genuinely semantic problem of conceptual content. But propositional annotation programs, e.g., need not presuppose Fodor’s “‘computational’ semantics.” I think that Fodor’s “‘computational’ semantics” is more accurately labeled “semantic computationalism”: the doctrine that the ultimate constituents of sentence meaning (and of the contents of intentional states like belief) are dispositions to do something.

The only task in computational linguistics that gets down to the level of conceptual content (that I know of) is lexical semantics, i.e., assigning words to the right places in a syntax tree based on the structure of the lexicon. But the definitions of concepts in a lexical database need not be designed according to one philosophical theory of conceptual content or another in order for the program based on that lexical database to do cool stuff. If Fodor is ultimately right, for example, and meaning reduces to reference, then that may present a problem for Strong AI believers building a system to *understand* language as we do. I doubt that people doing computational semantics (as opposed to semantic computationalism) will have to worry about this result, though. Computational semantics, on my understanding, simply claims that computers can do, e.g., deduction on a knowledge base extracted from natural-language text, and that some of these inferences might be useful.
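The deduction claim is modest, and a minimal sketch shows why: given propositions extracted from text and a few inference rules, forward chaining derives new facts mechanically. The facts and rules below are hypothetical stand-ins for the output of an NLP pipeline, not from any real system.

```python
# Minimal forward-chaining deduction over a toy knowledge base.
# Facts are (subject, predicate) pairs, as if extracted from text.
facts = {("socrates", "man")}

# Rules: if a subject has the antecedent predicate, infer the consequent.
rules = [("man", "mortal")]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subj, pred in list(derived):
            for antecedent, consequent in rules:
                if pred == antecedent and (subj, consequent) not in derived:
                    derived.add((subj, consequent))
                    changed = True
    return derived

kb = forward_chain(facts, rules)
print(("socrates", "mortal") in kb)  # True
```

Nothing in this sketch takes a stand on what conceptual content *is*; it only claims that some such mechanically derived inferences might be useful, which is the computational-semantics claim as I read it.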
Semantic computationalism (à la Brandom) makes the philosophical claim that conceptual contents are determined by the roles those concepts play in (valid) inferences, which relate them to each other in a network of inferential moves.
I’m not sure that this distinction changes anything for those who agreed with “fuck computational linguistics.” I do think that the key areas of philosophical trouble, which conceal many inconsistent (linguistic) theories, are relevant only to a CL program motivated by Strong AI, i.e., comprehensive modelling of human cognition. I suppose that other CL researchers believe (and hope, I guess) that computational projects in linguistics can be useful, or enlightening about the nature of meaning, without reconstructing human cognition.
December 9, 2011
Listen to 0:57 of “Kracked” on You’re Living All Over Me. Now listen to “‘Cross The Breeze” from Daydream Nation at 5:53. I guess SY weren’t kidding when they said the album was Mascis-influenced 🙂