a crisis of conscience

what if i don't hate model theoretic semantics as much as i once thought?

categorial grammar, as i shakily understand it, goes a little something like this: sentences can be broken down into syntactic constituents, and those pieces can be broken down further (into words, say). each constituent, though, say 'to the store,' has a meaning that stays constant over most of that constituent's appearances in other sentences. thus, meaning (semantics) and structure (syntax) should go together. if 'the store' has a meaning, it should be some kind of unified structure. ditto 'to the store,' and 'ran to the store.'

already we've made a lot of assumptions i'm not comfortable with: that sentences are the largest semantic unit (and that they express truth conditions), that meanings of constituents are consistent over multiple sentences/discourses, that meaning is compositional (built up from smaller units in a systematic way). yuck.

but, on the other hand, it's a pretty elegant solution once you accept all that.

categorial grammar treats each constituent as a function that operates on sets: input a noun into a verb, and you get a truth value (true if the noun is a member of the set of things that does the verb, false if it isn't). and these functions are identical to the structural categories that build the sentence up syntactically.
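to make the set-membership picture concrete, here's a minimal python sketch. the tiny "model" (the individuals and the verb sets) is entirely invented for illustration, not drawn from any real theory or corpus.

```python
# a toy model: individuals and verb denotations are made up for illustration.
# on this view, an intransitive verb denotes a set of individuals.
runs = {"alice", "bob"}
writes_a_blog = {"alice"}

def apply(verb_denotation, individual):
    # treat the verb as a function from individuals to truth values:
    # true iff the individual belongs to the verb's set
    return individual in verb_denotation

print(apply(runs, "bob"))            # True: "bob runs" comes out true
print(apply(writes_a_blog, "bob"))   # False: "bob writes a blog" comes out false
```

the syntactic category of the verb (something that combines with a noun to give a sentence) and its semantic type (a function from individuals to truth values) line up exactly, which is the elegance being described above.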

i'd love it if i were a math person, but i feel obligated to hate it, since it's a lifeless, bloodsucking way to look at language. i think.


Seb said...

As somebody who gets accused of being a math person often, I can vouch for model theoretic semantics giving me the heebie-jeebies also.

It's not the lifelessness of it that bothers me, though--after all, I spend most of my academic time trying to figure out how to reduce human mentality to mechanical computation and math. But the psychological unreality of it all is absurd. There's no way that 'writes a blog' denotes a set of things that write blogs, to me. Where is that set? How could I comprehend it? Is it out in the world? Is it in me?

I had a talk with a Cog Sci professor of mine who I respect a lot about this today though. This guy is all about Bayesian probability, which breathes life into otherwise undead statistics by making probability and induction into purely subjective states. I thought he'd be sympathetic when I explained that all my hunches reject MTS.

But no! He claims that probability requires MTS. "And you believe in probability, don't you?"

But there's got to be another way. There's got to be. It just wouldn't make sense otherwise.

Marc André Bélanger said...

“input a noun into a verb, and you get a truth value (true if the noun is a member of the set of things that does the verb, false if it isn't).”
Actually, it doesn’t have to be a truth-condition per se; the theory I was working in would say that, in order to be full, it needs to be (semantically) supported by a noun (if it is not full, it becomes more expressive, as in imperatives or some questions). If that noun does not denote something that does the verb, it is not necessarily false; it could be metaphorical. It would stretch the meaning, sometimes to the breaking point (where we're not talking of metaphors but nonsense). So we don't really need MTS.

As for “to the store” being roughly the same everywhere, that would mean that there is only one way these three words could mix. If we see words as sets of potential meanings, “to the store” could be considered as some sort of intersection of the three words; doesn’t mean that there’s only one potential meaning in that intersection.
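the "sets of potential meanings" picture can be mocked up the same way. the reading labels below ("motion-toward" and so on) are invented purely for illustration; the point is just that the intersection of the words' potential meanings can keep more than one candidate.

```python
# toy sets of candidate readings each word could support;
# the labels are invented for illustration only.
to_readings    = {"motion-toward", "recipient", "attitude-toward"}
the_readings   = {"motion-toward", "recipient", "attitude-toward", "definite"}
store_readings = {"motion-toward", "attitude-toward", "stock-of-goods"}

# the phrase's potential meanings as the intersection of its words' sets
phrase_readings = to_readings & the_readings & store_readings
print(sorted(phrase_readings))  # ['attitude-toward', 'motion-toward']
```

two readings survive the intersection, so treating "to the store" as an intersection doesn't force a single fixed meaning.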
