03 February 2026

Blogspot Bingo: Family Resemblance


google query, 11 dec 2025:
wittgenstein family resemblance site:blogspot.com


Philosophical Perspectives in Clinical Psychology
Essences and Family resemblances

perhaps even Wittgenstein - but certainly those who have taken his ideas to warrant talk of types of concepts called 'family resemblance concepts' - stray too far towards equating essences and necessary/sufficient conditions. Just because various instances of a phenomenon may have no one thing in common apart from their being instances of that one phenomenon - no one further thing in common, one might say - does not, I contend, imply that the phenomenon has no essence.

A worthy insight.

I think I know what necessary/sufficient conditions are. What is an essence? If I try to define it in a way that allows me to agree with the above, I can conjure only fanciful metaphors.

e.g. It's possible (likely, even) that the 'average' of a set of numbers does not itself appear in the set.

Here as always, the 'truth' of the average is a matter of what you're trying to do with it. So with essences, I would think.

In this metaphor of 'averages', essences are reductive. Is there a constructive version? I don't think so.
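To put the metaphor in the plainest terms, here is a throwaway sketch (in Python; the numbers are arbitrary, chosen only for illustration): the mean of a set need not belong to the set, and the mean alone will never rebuild the set.

  # a toy illustration of the 'average' metaphor; nothing here is anyone's argument
  xs = [1, 2, 4]
  avg = sum(xs) / len(xs)            # 2.33... -- the 'average'
  print(avg in xs)                   # False: the average appears nowhere in the set

  # reductive, not constructive: many different sets collapse to the same summary,
  # and there is no way back from the summary to any one of them
  ys = [0, 3, 4]
  print(sum(ys) / len(ys) == avg)    # True -- same average, different set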

"It is far easier to analyze [someone] than to synthesize him."

(E.O. Wilson, Consilience, p. 83)


Find posts labeled 'Wittgenstein' here.

From an "addendum" here:

The concept of 'family resemblances' is often put forward as a counterpart to the idea of a concept articulable in terms of necessary and sufficient conditions. I think it fair to say that, to someone still in love with the analytical philosophical project, the very idea of a 'family resemblance concept' looks a poor thing. If one still feels an urge to 'know' phenomena in the way that the analytic metaphysician wants to make their acquaintance - which is rather, one might say, a way of forcing them to make his acquaintance - then to be told that a concept has no 'essence' but only, rather, a set of 'family resemblances' looks a cop out. Well, to be honest, and mindless now of whatever Wittgenstein might have meant by it, the concept looks a poor thing to me too. The problem, it seems to me, is not with the idea that phenomena have essences, but that the concept of an 'essence' is the concept of what corresponds to a list of defining characteristics. From the standpoint of the analytical philosopher, the concept of 'family resemblances' fails to scratch his itch. If you stay in the gloom of the cave, the concept of 'family resemblances' will shine but the feeblest light on the phenomena. My claim here has been that the philosopher's itch is caused by a vitamin D deficiency - which will lift when we help him out into the sun, help him encounter the phenomena on their own terms, in their own habitats, and cease trying to drag them back into his cave for dissection by lamplight.


Meanwhile, back at the ranch . . .

he [the narcissist] wishes to still have his normative cake even whilst popping it down the cakehole of his own subjectivity.

This too was just what we found with our private linguist. He wanted to be able to inwardly institute, and accord with, actual norms for the correct use of ‘S’, but also to continue to enjoy his inviolable first-person authority regarding his suffering of S. He refused to accept that the coin which has first person authority (of his avowals of S) as one of its faces must have an availability to appraisal (of the correctness of his use of ‘S’) as its other. It is this fantasy of normative self-sufficiency and yet inviolability – of being able to purchase the goods of normative warrant without handing over some of his first-person authority – which – I’m suggesting – constitutes a key structural element of the intrapsychic heart of narcissism.

Call this narcissism, or call it something else. It is rampant in any case.

I am mildly disappointed, actually, at the specter of sacrificing the breadth of this general observation to any narrower -ism, even to one which is as pop-topical as "narcissism" has become. We wish to purchase the goods of normative warrant without handing over some of our first-person authority. Wouldn't all of us take this if we thought we could get it? How much of "narcissism" is a crime of opportunity rather than a crime of passion?

Elsewhere our author says,

The psychoanalytic theory of narcissism is complex. There are many different sub-plots; 'narcissism' may best be described as a 'family resemblance concept' (Rustin); and there exists a tendency (which may not be a bad thing) for the theory of narcissism to become more a theoretical lens through which all psychopathology (depression, personality disorders, schizophrenia) is understood than a set of testable claims of empirical psychology.

Rather than viewing everything through the theoretical lens of narcissism, I would prefer to view narcissism through the lens of everything else.

This suggests a behavioristic definition of the 'culture of narcissism': an entire culture has become narcissistic when psychotypical people who have healthy relationships and healthy consciences can nonetheless be found grabbing all the normative warrant they can carry without handing over anything in return. We find them here not due to their 'fixed traits' (which conventionally contraindicate "narcissistic" behavior) but rather due to 'situations', contingencies of 'culture' which channel them (and everyone else) in this direction.

e.g.

(1) The Carrot: there exist today all sorts of mediating mechanisms which enable us to do this; in the given example of the man talking loudly on the bus, this is the public bus itself and, of course, the cell phone. Neither on its own is entirely sui generis, but their combination, as given in the anecdote, has some unique aspects; moreover, it is a combination which is contingent upon myriad other 'mechanisms', institutions, behaviors, etc.; that is, upon an entire 'culture' and its various artifacts of 'cultural evolution'.

(2) The Stick: owing to structural factors, the opportunities to make a fairer exchange of "authority" for "goods" are increasingly appropriated and hoarded by a status elite. In Freddie deBoer's words, "our culture has more ways to be a loser than a winner." The "losers" thus are driven to take what we can get, when and where we can get it. The normative warrant market looks a lot like the housing market: no one can save up enough first-person authority to get together a down payment; and anyway, forking over one's entire income and savings for housing leaves nothing for food. The market is global and there are 80 million one-percenters to contend with.

In short, most of us have no first-person authority to exchange because our 'culture' has bled us dry. By the time we've bought back our dignity from the bureaucrats and the rentiers, there's no budget left for status.

Once exchange has ground to a halt, the goods pile up on the dock and theft ceases to be punished. When everyone constantly breaks the rules, the rules become unenforceable.

Self preservation is, or should be, the first law of nature. The animals, when in a natural state, are showing us the way. When they are hungry they will always try to get something to eat or else they will die in the attempt. That's natural; to starve to death is unnatural.

('THE LAST LETTERS OF JOE HILL', in Rebel Voices: An IWW Anthology ed. Joyce Kornbluh, p. 150)

I am proposing that people can behave like narcissists without being narcissists; it's the behavior that matters to the culture, not the inner life; it's the conditioned response to circumstance and environment that has become determinative. The apogee of the 'culture of narcissism' is narcissism without narcissists. In the analogy to crime, this is when people really are stealing just to survive, i.e. because it's the only "natural" avenue of "self preservation" that remains open to them.

Now, in the given example, the too-loud bus-talker is presumed to know both "normative warrant" and "first-person authority". These constitute a key structural element of the narcissist's psychic condition. I am proposing, instead, that we not presume to know what he knows.

Parsing social action is one thing. Parsing motivation, explicit knowledge, or self-awareness is something else entirely. To effect meaningful change, we must ascribe the warrant-authority conflict to the situation, and to the culture; and to both strictly in the materialist sense.

Shorn of voyeuristic inferences and reduced to 'logical behaviorism', the situation on the bus is greatly simplified. No longer is Our Man trying to have it both ways. He is, in fact, trying to have it exactly one way. Moreover, he succeeds in having it, while the 'silent majority' is impotent to deny him.

This is neither a social nor a psychic problem. Either it is an engineering problem, or else it is no problem at all.

If we try to read bus riders' minds and to intervene on that basis, nothing will improve. This is what we enjoy (professionals and hobbyists alike), but it cannot fix anything.

Occasionally even card-carrying cognitivists say the quiet part out loud:

I was convinced by [Meehl's] argument that simple, statistical rules are superior to intuitive "clinical" judgments. I concluded that the current [IDF] interview had failed at least in part because

it allowed the interviewers
to do what they found most interesting

which was

to learn about the dynamics
of the interviewee's mental life.

Instead, we should use the limited time at our disposal to obtain

as much specific information as possible
about the interviewee's life
in his normal environment.

(Daniel Kahneman, Thinking, Fast and Slow, p. 230)


Finally, a bit of Chesterton:

There is a notion adrift everywhere that imagination, especially mystical imagination, is dangerous to man's mental balance. Poets are commonly spoken of as psychologically unreliable; and generally there is a vague association between wreathing laurels in your hair and sticking straws in it. Fact and history utterly contradict this view. Most of the very great poets have been not only sane, but extremely business-like... Imagination does not breed insanity. Exactly what does breed insanity is reason. Poets do not go mad; but chess players do. Mathematicians go mad, and cashiers; but creative artists very seldom.

And again:

I am not, as will be seen, in any sense attacking logic; I only say that this danger does lie in logic, not in imagination. .... Poetry is sane because it floats easily in an infinite sea; reason seeks to cross the infinite sea, and so make it finite. The result is mental exhaustion... The poet only desires exaltation and expansion, a world to stretch himself in. The poet only asks to get his head into the heavens. It is the logician who seeks to get the heavens into his head. And it is his head that splits.

Whither the critics and the interpreters?

Whither those who litter the Commons with all manner of concrete propositions, all while claiming neither desire nor presumption of logic?

Chesterton again:

Mysticism keeps men sane. As long as you have mystery you have health; when you destroy mystery you create morbidity. The ordinary man has always been sane because the ordinary man has always been a mystic. ... He has always cared more for truth than for consistency. If he saw two truths that seemed to contradict each other, he would take the two truths and the contradiction along with them. ... Thus he has always believed that there was such a thing as fate, but such a thing as free will also. ... He admired youth because it was young and age because it was not. It is exactly this balance of apparent contradictions that has been the whole buoyancy of the healthy man. The whole secret of mysticism is this: that man can understand everything by the help of what he does not understand. The morbid logician seeks to make everything lucid, and succeeds in making everything mysterious. The mystic allows one thing to be mysterious, and everything else becomes clear. ... The one created thing which we cannot look at is the one thing in the light of which we look at everything.

👏 👏 👏


Specter of Reason
Wittgenstein and Family Resemblance

Wittgenstein's point was not that we cannot draw a clear line around the concept of "game," nor was it that such lines could not be interesting or valuable. It was rather that such lines do not precede our use of the expression "game." In order to circumscribe the concept of "game," we must draw distinctions which have not already been made, and which do not underlie every meaningful and valid use of the concept. We need not first have a clearly defined concept before we can meaningfully use an expression. The uses of ordinary language expressions have developed in the absence of clear definitions which fully account for their use.

Similarly,

any circumscription of
the expression 'art'
does not precede our use of that expression.

Such distinctions
have not yet been made
as the art-boat
floats for Parnassus.

The Investigations are more commonly interpreted as a series of tasks which the reader is asked to undertake in order to recover from their (presumed) restricted view of language and meaning. When W. asks us not to think, but to look, he is asking us to perform a specific task for a specific purpose. By asking us to look at what we call games and consider what we see before defining a strict concept of a game, he asks us to consider all the sorts of things we call "games" without first deciding what defines a game as such. He wants us to think about games, but not by analyzing them in terms of a single rule. The point is that we can (and normally do) think about games without analyzing them in terms of an all-encompassing rule, and that if we do draw such a rule, it will (at best) only resemble the way we had previously thought of games, ...

PI, 68: " ... Can you give the boundary? No. You can draw one; for none has so far been drawn. (But that never troubled you before when you used the word 'game.')"

...

Wittgenstein says that one can draw a definition for the concept of game, but that such a definition was not needed for the concept to be useful.

...

Now, it may very well be that all games do have something in common--but having such a feature is not what makes them games. It would be a remarkable coincidence if everything we called a game had one feature in common which was not shared by anything else. For we do not appeal to such a feature when we apply or judge applications of the term "game." That is, unless we are using a strict definition for a special purpose, as W. notes.

So, an interpretation that says "Well, Ludwig didn't mean to deny that games share a common feature in virtue of which they are games, only that that isn't how our concept works; so if you come up with the feature, it wasn't needed"--such an interpretation is surely false. Wittgenstein does deny that all games share a common feature by virtue of which they are games. But this does not mean that games cannot share a common feature. So, pointing out a common feature does not in any way challenge Wittgenstein's point.

as Wittgenstein says, our inability to give a definition for "game" is not a result of our ignorance. It is a result of the fact that no definition has been given. ... We cannot isolate the set of all games because no well-defined set exists as such. But, as W. says, we can define such a set for a special purpose. The application of the concept "game" is not realized in the world ahead of time. We can thus disagree about how to apply the concept of "games" without being able to appeal to a rule to decide who is right--again, not out of ignorance, but because the concept is not completely circumscribed by rules.

... if you will grant ["that W. claims we can employ concepts in the absence of all-encompassing definitions"], Wittgenstein's conclusion seems to follow. That is, unless you suppose that our employment of concepts mysteriously matches with independent rules for their application, even if we do not know of them; in which case, if we do disagree on the application of a concept, there is a sense in which one of us is right, and the other is wrong, even though we have no means of knowing it. Wittgenstein clearly does not embrace such a view ... There is no basis for regarding the set of all games apart from the rules we use to regard that set. If our rules are not all-encompassing, then the set of games is a family resemblance set.

Your objection is that there really is a common feature for all games, and that we have such a feature in mind (somehow, perhaps unconsciously) when we talk about games. However, this is highly unlikely, considering the way we learn how to use the term "games." Even if there is a common feature, such a feature is not obviously what defines games as such. It seems more likely that such a feature would be wholly coincidental, since we do not appeal to it and are so far unaware of it, despite our complex use of the term "game." ... Thus, W. predicts that no such feature is to be found. But to overcome W., it is not enough to suggest a common feature shared by all games. You would rather have to find a feature which defined games as such. That is, you would have to show that the feature in question was operant in our conceptualization of games. To undermine the notion of family resemblance (as it applies to games), you would have to show that our grouping together of games relies on some recognition of that shared feature.

This gets at a problem with the Institutional Theory that I have, for some reason, had trouble articulating quite this clearly: Are Institutional Theorists concerned to show that their definition really has been operant in our conceptualization of 'art'?

I suspect this is not their project. If it is, then it obviously fails, per all above.

But then, what is their project, if not this?

The search for a definition is itself nothing but a special purpose. Hence the enterprise can succeed, and still, the application of the concept is not realized in the world ahead of time, nor has it been shown that we now or ever before have recognized that shared feature in the mere run of things.

So . . . what is that purpose in this case? To ask this question is to exit Analytical Philosophy and enter Culture Warring and Twitter Beefing. I'd like to avoid that . . . but still, you can't help but wonder!

I want to say that any Philosopher worth their salt should be able to satisfactorily define their terms. Ditto we the readers: we too should not just want this for them but actually demand it of them. That is the best reason I can think of to continue searching for a definition of art; but it seems that if you accept the Wittgensteinian account then this is no reason at all: by then, your "purpose" has not defined your topic and your usage but actually obliterated both at once. So, perhaps paradoxically, we should end up intentionally avoiding explicit definition and writing as casually as possible, whistling in the dark as it were, saving the explicit definition of terms for those highly artificial or abstract digressions which are self-contained in a way that 'ordinary language' is not. 'It is the logician who seeks to get the heavens into his head. And it is his head that splits.' Profoundly Wittgensteinian indeed. Philosophy is not just prose logic.

I do not think Wittgenstein would disagree with your claim that "we are sometimes guided by commonalities that we do not know how to express." Indeed, we are guided by commonalities. That is the point. We are so guided, but we do not have rules ahead of time which define their boundaries. We do not always know how and when to apply them appropriately--and this is not a matter of ignorance, but simply a lack of definition. What lends credence to Wittgenstein's view is not merely the fact that we do not learn language solely by applying dictionary definitions, though that is certainly evidence in W.'s favor. Rather, W. points out problems which arise when you try to think how the conception of "game" could somehow fully account for its use ahead of time, as if all of the possible movements of a machine were there in the machine before it was ever used. Once we realize that the use of a term is not determined in advance, a whole host of philosophical problems disappear.

Find posts labeled 'Wittgenstein' here.

Here is one called Ryle On Rules And Creativity:

Wittgenstein once said that only two people understood his philosophy, and that one of them was Gilbert Ryle.

When we speak of minds, we are not speaking of entities, states, or events. The language we use to talk about minds employs a logic of dispositions, and not simply of occurrences. Rather than think of the mind as a particular place or thing, Ryle asks us to imagine it as a complex set of abilities, capacities, skills, and so on. These are observed "in the long term," as Dennett puts it, and not as discrete entities, states, or causes.

... when we say a glass is brittle, we are saying something about how the glass will act in certain circumstances. Similarly, to say that a person has a mind is to say that they act in special sorts of ways--ways which exhibit intelligence. Of course, minds, like brittleness, may be explained in terms of causes and effects. However, neither minds nor brittleness are particular structures or particular sets of causes and effects.

Ryle's view has been associated with behaviorism, since it defines the mind in terms of observable behavior. (It is sometimes called "logical" or "philosophical" behaviorism, to distinguish it from the psychological behaviorism commonly associated with B. F. Skinner.) ... Ryle's argument is not that we do not have private thoughts, or that we do not imagine, think, or feel. ... Rather, he says that the marks of the mental are not intrinsically private. ... Intelligent behavior is not the product of intelligence; it is intelligence itself.

Psychologists have recently found evidence that children are able to attribute knowledge-that before they are able to attribute knowledge-how (Tardif, Wellman, Fung, Liu, and Fang 2005). Knowledge-how appears to be more opaque, or it requires different skills to identify, than knowledge-that. This is consistent with Ryle's view, which is that knowledge-how is more complex and heterogeneous than knowledge-that.

Ryle does not privilege “conceptual” over “causal” accounts of behavior, as Fodor and Dennett say. He does not reject scientific accounts of behavior, nor does he minimize or devalue their efficacy. Rather, Ryle’s aim is to map out the logic of psychological explanations, a project he refers to as “philosophical psychology”. He argues that psychology proper is a mixture of causal and non-causal explanations, and that it can only pave the way for “the establishment of precise functional correlations or causal laws”. This suggests that psychology may one day be replaced by a more rigorous science of human behavior. Far from showing bias against science, Ryle embraces its potential.


Sprachlogik
On Family Resemblance Concepts

Diachronic family resemblance concepts?

It would seem that the concept of a particular person can't be a family resemblance concept in the sense of many different things falling under it ... because at most one thing does fall under it. But if we consider the individual through time, we start to see the possibility for something like the family resemblance idea applying to the concept of a particular person.

This gets at several crucial Aesthetic points all at once.

It is not just throughout time that we can be treated to the overlapping similarities of a single object. There is something poignant (thus salient) about that kind of thing in the case of persons, but persons are not the only sui generis objects which get to enjoy having a concept all to themselves. Presumably artworks too are like this (whether or not they have anything further in common with persons). Margolis, Wolterstorff and Levinson have together given us as much torturous 'ontology' and alphabet-soup 'general metaphysics' of artworks as I personally can tolerate. The Wittgensteinian view, if I am understanding it at all correctly, seems to invite us instead to remember the time before we ever became concerned to establish precisely how many wrong notes it takes to turn a symphony of Beethoven's into some other, ontologically discrete art-entity.

This may be the simplest matter of revealing a hitherto uncodified practice: i.e., we really did recognize the piece as Beethoven's, in spite of the mistakes, without being given the answer in advance; the music we heard on that occasion, mistakes and all, really was operant in our (later) conceptualization of the piece in question. Case closed.

But things are not usually so simple, for we probably were 'given the answer' ahead of time, e.g., by a concert program, liner notes, or an 'announcement' either from the stage or (perhaps uninvitedly) from someone seated nearby; and presuming we took this information to heart, this essentially forces upon us whatever the result of the performance turns out to be. In some sense, the drawing of distinctions now really has come before use of the expression. In the turgid language of the Art-Ontologists: The performers issue an ontological stipulation before the token is generated; this stipulation is borne of intent; but intent is cheap even when it is sincere, for the criteria for fulfilling an intent thus stipulated are different from the criteria by which a listener (of any level of sophistication) would 'blindly' identify the work.

Now let's turn the tables: Say we have a digital recording of the symphony and some serviceable (and durable) audio gear on which to enjoy it again and again. I find the supposition that we will have the same experience each time too laughable to bother with; but this seems to be exactly what is assumed by whatever 'aesthetic realists' Carroll purports to be summarizing in Ch. 4, Pt. II. If we presume that the stimulus does not change but the 'experience' somehow does, then obviously we are the ones who are changing. That is too vague, admittedly, to permit of much development; I want only to insist that it follows unavoidably from the other given conditions, and that it represents the antirealist steelman to Carroll's various strawmen.

We conceptualize a person as at most one thing falling under some name-heading; but if we had to match our snapshots inductively to some arbitrary pregiven, the way a music professor 'drops the needle' and the music major tries to identify the piece, the results would, I'm guessing, be closer to a bell curve of hits and misses, as I presume they are in these classes too. If not for printed programs, would you always know what piece was being performed?

(A silly but real example: A while back, someone who knew me very well happened upon this blog and was absolutely flabbergasted. That's a pretty good example right there. How well did he 'really' know me? Truly better than most, but somehow he either wasn't privy to or was oblivious to anything that would have suggested how many words I've churned out over the years, in addition to all the notes. Meanwhile, my current bandmates accuse me of 'writing manifestos' because they see me hunched over this laptop on almost every break. Which of these people am I? Am I one person? Two? Am I infinite? Bounded? Who cares? But yeah, maybe getting a taste of the Wittgensteinian elixir is really no more complicated than provisionally making yourself into the object. For better or worse, that is pretty easy to do nowadays.)

(The digression on 'ontology' spoils a number of Margolis anthology posts that have been completed for a while but which I've been holding back due to getting sidetracked by Carroll. Oh well.)


Philosophical Investigations
Wittgenstein's Toolkit

Speaking from personal experience, it’s tempting to gloss over §§1-133. ... it’s easy [to] think, “When do we get to the sexy stuff? When do we get to rule-following or the private language argument? ... ” But the discussion of the “sexy stuff” flows directly out of the approach and methodology developed in the book’s opening quarter. If you don’t have a clear grasp of that then it’s going to be next to impossible to see the later arguments in their right light. (For now I shall merely flag up the additional point that you probably can’t get a clear grasp of §§1-133 until you’ve read the rest of the book.)

I would assume as much. Of course what I'm doing here is a bit chancy. What can I say? Something is telling me not to go directly to certain primary sources, to start with secondary sources instead, in certain cases. I already felt this way, but now I've come across more than one self-report of complete paralysis following a first encounter with the Investigations: one reader reports a three-day-long 'nervous breakdown', another that he could not write for a year and a half. We're all the heroes of our own story, so I tend to think that a transcendent reading experience would have the opposite effect on me, that I would write constantly for a year and a half. But who knows. That wouldn't be the best thing either. Perhaps more to the point: I have the Millennial Plague of distractibility and impatience that makes 'close reading' elusive.

Moving right along, here is another able summary of a suitably dangerous-to-dabble-in topic:

Wittgenstein’s New Philosophy: A “No Theory” Theory?

According to Wittgenstein, the bewitching allure of metaphysics ... has led to a collection of a priori theories concerning how the world must be, but such theories are founded upon illusions brought about by misunderstandings of, and misrepresentations of, our forms of expression. The result has not been to create a body of philosophical knowledge but to generate a series of seemingly intractable problems – ...

Thus philosophy is left with the task of revealing the illusionary nature of these problems. They are not to be solved but dissolved: ...

  […] we may not advance any kind of theory.
  There must not be anything hypothetical in our considerations.
  All explanation must disappear,
  and description alone must take its place.

Philosophical Investigations §109

... we will be “marshalling recollections for a particular purpose” (§127). And the results will not be new facts about the world but a clearer understanding of what we already know, ...

This, basically, is Wittgenstein’s New Deal for Philosophy, as set out in §§109-133. ... [but] I think it’s fair to say that the philosophical community as a whole has declined to take Wittgenstein up on his offer.

... For now, ... : how the hell can you do philosophy without theories?

... When, for example, Wittgenstein says “the meaning of a word is its use in the language” (§43), isn’t that a theory? ... [Does not] any general statement (eg, “cats are more intelligent than dogs”) [make] a theoretical claim about the world? ...

... not every general statement can meaningfully be called theoretical. A theory operates in an area of contention – ... So “cats are more intelligent than dogs” could be considered theoretical because it is by no means obviously true. ... On the other hand, “giraffes have longer necks than swans” is not (for us) theoretical. ... Likewise, it is not theoretical to point out that “losing my mind” is unlike “losing my hat”. ...

... an important distinction ... The non-theoretical status of the giraffe/swan statement is a posteriori; it is a contingent truth that we have established about the world. We could imagine a situation where this truth was still up for grabs, and there the statement would be theoretical. ...

The mind/hat statement, on the other hand, ... does not remind us of an established empirical truth, but a grammatical one. We didn’t discover it by encountering hats and minds; we learnt it when we learnt our language, and the truth it expresses partly constitutes what minds and hats are. The only way it could be “up for grabs” would be if someone didn’t know how we use the words “mind” or “hat”. And that person would require linguistic instruction rather than new empirical facts. In this sense, the mind/hat statement is a priori. ...

Since these grammatical observations are not theoretical, it is possible to derive general statements from them which are also not theoretical. Such statements ... do not rely on deduction and do not express hypotheses. They can be verified, not by experiment, but simply by looking and seeing whether they correctly reflect the established facts.

... “A name means an object” is a dogmatic expression of an a priori theory ... By contrast, “The meaning of a word is its use in the language” asserts what we will all admit to be true if we look carefully at our forms of expression ... Wittgenstein does not say it must be so (indeed, he explicitly says it is not always so), merely that – most of the time – it is so. And it is precisely this sort of statement he has in mind when he says, “If someone were to advance theses in philosophy, it would never be possible to debate them, because everyone would agree to them” (§129).

...

A philosophical theory looks like it’s making an a priori claim about how the world must be, but actually its a priori nature comes from its use of conceptual rules. Determinism, for example, ... “every event has a cause” is not an empirical fact ... ; it is a conceptual precondition for certain types of activity – scientific investigations, for example. ... That is the nature of its “must”. It does not, however, guarantee anything about what is or isn’t the case.

It is illicit, therefore, to move from “every event has a cause” to “free actions do not exist” because that is making an unsanctioned existential claim about the world. What you can do, however, is examine the conceptual underpinning of “free will” together with the concept of causation to see how they relate to each other. ... [Here] everything remains at an a priori level. It does not ... save the theory by getting rid of its mistakes – it gets rid of the theory. That is because we are no longer deducing what must be the case, but consulting the rule-book to see how things are. ...

Wittgenstein’s rejection of philosophical theories is not based on the notion that they’re unlikely to yield results. His argument is that they cannot yield results ... [This] flows directly from Wittgenstein’s ideas about meaning as use, language-games, the nature of rules, and family resemblance concepts. If those are accepted then ditching theory is mandatory, not optional.

Finally ... [this proposal] is also not itself a theory. It is a proposal offered as the only way of avoiding the endlessly repeated mistakes of the past ... The price to pay consists in renouncing philosophy as a heroic endeavour – one where the next great mind might finally hit upon the correct theory and explain things to everyone’s satisfaction. ... Instead it would be a more humble matter of “marshalling recollections for a particular purpose”. ... ...


Philosophy by the Way
Wittgenstein and Photography

In the mid-1920s, Wittgenstein did some photographic experiments with the help of his friend Moritz Nahr, a court photographer. One of these experiments was making a composite photo of three photos of his sisters and one of himself. This photo is said to be the start of the development of his ideas of language game and family resemblance.

Hmm. I don't care, but I do want to see the photo, and purportedly this is possible to do. So, thanks for that.

Maybe AI can make me one of Noël Carroll and Gregory Peck . . .


In the Space of Reasons
Pickering on family resemblance

  "What, for example,
  is the concept implied by
  (square root of) 9,
  the chair I’m sitting on,
  Beijing, and the Mona Lisa?
  What is lacking from this set of items
  is any sense that they belong together.
  There is a need for a criterion of coherence."

... There seems no reason to preclude this arbitrary list as instancing a general concept ... Let’s call it: Pickering’s example of random things. We can think of the list under that generality. Under counter-factual conditions, different objects might have occurred to him.

This is precisely where I landed with Bambrough's 'alphas'. It seems that any such attempt to conjure pure incoherence in list form is bound to upstage itself.

I presume the point (one point) of W's enumeration of games and features of games is that this exercise too produces "coherence" quite in spite of itself: Namely, it produces a specious coherence not in contrast to [So-and-so]'s example of random things but rather quite in the same vein. Coherence sounds expensive, but it turns out to be cheap. Every linguistic homestead has a formidable backyard furnace for churning out coherence by the pound. What is elusive is basic fidelity to the a priori level, to the rule-book as it is rather than how we might theorize it to be. This too should be easy. Why should it be difficult?

Beyond all of that lies the problem of counter-factual conditions. Perhaps this is a convenient tactic for diagnosing 'specious coherence', should the listmaker actually happen to remake the same list on some future occasion; but this, for one thing, would remain a fact about them, not about anything bigger than them; and, moreover, failing such developments in actuality, isn't this presumption really just one more theory? i.e. I want to ask: What must be established about "conditions" in order for this presumption to become one which we will all admit to be true if we look carefully at our forms of expression?

I would think that this is exactly what "conditions" are vis-a-vis what happens to come to mind in such tasks, and vis-a-vis the whole obverse side of 'naive practice' too; and I would think that our susceptibility to "conditions", so to speak, is far 'deeper' than is our ability to notice and codify them. So, among other things, I resist all theories of 'reflection' in such matters, because there is no such thing as fidelity to an unnoticed referent: to know what the transformative relationship is, we would have to be privy to both sides of it, so as to then deductively isolate the vector of transformation. Otherwise, how can we simply assume that the transformation is nothing more nor less than a 'reflection'? Is the sheer persistence of this term itself something at which to look and see without theorizing? It seems to me that it is itself the very epitome of a "theory", which is why I've started to speak generally of 'reflection theories'.


Aesthetics Today
Margolis on Defining Art

Although he makes a tough read sometimes, and although it is a bit hard to take his self-certainty, I still think Joe Margolis is one of our best aestheticians.

Uh-HUH.

His “The Importance of Being Ernest about the Definition of and Metaphysics of Art” (Journal of Aesthetics and Art Criticism 68:3, 2010, 215-223.) ... His strategy is to argue that Weitz has misunderstood Wittgenstein’s “family resemblance” concept. ...

Umm-HMM.

... Margolis allows for accounts of art that are both realist and essentialist and also open to revision and reinterpretation. He also allows that the great philosophers’ attempts to define art or its genres are not worthless. Nor are they just disguised theories of evaluation, in the manner that Weitz suggested. ... He argues for instance that there are many different kinds of definition, and that art can be defined for a special purpose.

I think W and the W-ians were plenty hip to that!

Perhaps this article should be included in the present roundup. Beyond all above, the mere fact that it postdates Carroll's book makes it interesting.

The guiding questions for that annotation are already obvious upon a first skim:

(1) Declare your purpose, bro! Who among the definers has been transparent as to the purpose of their effort? Why don't we ever get to know what it is?

(2) Does anyone who is not a professional philosopher have such a purpose? Anyone could seize upon any old "purpose" to guide a definition, but presumably not everyone does; and presumably that is because they don't absolutely have to. So, what are the actually important art-purposes which require a definition to be hammered out in their own image?

(3) The positing of a purpose-definition nexus itself passes philosophical muster, but in this context it still seems like a bit of a dodge. Are we to assume that an all-purpose definition still cannot be found? Is it not thus being conceded here that the 'concept' of art as we use it indeed can at best be looked at and seen but not 'defined'?


Histories and Theories of Intermedia
Art After Philosophy (1969), Joseph Kosuth

Presumably this is the Kosuth article mentioned by Margolis in the above-mentioned paper. He makes some interesting claims in regard to it. He does not CITE it, however, nor does this widely-anthologized article appear in the Anthology. Our Guy Joe philosophers hard.

Per Wikipedia, the article

was, for the twenty-four year old Kosuth that wrote it, in fact more of a "agitprop" attack on Greenbergian formalism, what Kosuth saw as the last bastion of late, institutionalized modernism more than anything else. It also for him concluded at the time what he had learned from Wittgenstein - dosed with Walter Benjamin among others - as applied to that very transitional moment in art.

(31 Jan 2026)

Vintage wikispeak, this. We shall see what, if anything, "he had learned".

There is another hit on this blog much, much further down the search . . .


zoran rosko vacuum player
Bernard Suits - whimsical presentation of ideas about games, language, and utopia, it sparkles with wit and fun; and outranks those wonderful works in clear, firm philosophical conclusions. He defines gameplay as "the voluntary attempt to overcome unnecessary obstacles"

But really folks, let's have some theory, shall we?

To play a game is to attempt to achieve a specific state of affairs [prelusory goal], using only means permitted by rules [lusory means], where the rules prohibit use of more efficient in favour of less efficient means [constitutive rules], and where the rules are accepted just because they make possible such activity [lusory attitude]. I also offer the following simpler and, so to speak, more portable version of the above: playing a game is the voluntary attempt to overcome unnecessary obstacles.

I rather like the portable version. I confess that I like it enough to feel just a twinge of longing for the heroic endeavor of firing inductive buckshot at seemingly intractable problems, hoping to finally hit upon the correct theory.

In more lucid (less ludic) moments, I can't deny that the combined weight of the above expositions seems more likely to win my philosophical loyalty in the end. Per those expositions, presuming I half understand them, this whole notion of clapping back at LW using his own example completely misses the point, and unfortunately there seems to be plenty of that going on. (I have omitted most of those posts here. The Specter of Reason post, meanwhile, is priceless because it is a collection of well-conceived rejoinders to precisely this effort to 'clap back' at LW. For the time being I'm content to refer all such efforts to those rejoinders and move on.)

Just in case: The notion of unnecessary obstacles is fruitful and poignant, but, on 'existentialist' grounds, it is the weakest part of the definition. For one thing, we need the opponent so badly that we'll invent him if he doesn't already exist. For another, he usually does exist (or someone already invented him). Perhaps in Jacksonville, say, the Packers are an "unnecessary obstacle". In Minneapolis, meanwhile, the entire state of Wisconsin and most of its people are literal obstacles to all kinds of things. The expression given this inconvenience by games is superfluous and hardly does it justice.

Lusory attitude: The attitude of the game player must be an element in game playing because there has to be an explanation of that curious state of affairs wherein one adopts rules which require one to employ worse rather than better means for reaching an end.

Along the same lines as above, I would question whether game players adopt rules. Only in the sociologist's notoriously reified sense can this "adoption" of rules be rendered as some kind of act.


Practicum: Critical Theory, Religion, and Pedagogy
Rethinking Classic Texts/Theorists: Ninian Smart

Revisiting Ninian Smart’s Call for Worldview Studies

Ann Taves, University of California, Santa Barbara

...

Smart’s dimensional approach to the study of religion emerged from his family resemblance view of religion. As he stressed, viewing religion in terms of family resemblance placed it [on] a continuum with other phenomena. Here is a typical quote:

The study of religion is without clear cut boundaries, for it is not possible or realistic to generate a clear-cut definition of religion, or, more precisely, any definition will involve family resemblance, as indicated by Wittgenstein. Such a definition would involve listing some typical elements of religion, not all of which are to be found in every religion. It is a natural consequence of this that there will be some phenomenon which bear a greater or lesser resemblance to religion.

... in contrast to Eliade, who emphasized the sharp distinction between the sacred and the profane, Smart sought to apply his dimensional analysis to systems that, as he said, are “commonly called secular: ideologies or worldviews such as scientific humanism, Marxism, Existentialism, [and] nationalism.”

Yet it is probably his openness to continuity and his inability to establish a clear distinction between religious and secular worldviews that has generated the most concern. ... Brian Rennie notes Smart’s reluctance to characterize secular worldviews as religions, his uneasiness when it came to specifying what made religions distinct, and at the same time Smart’s claim that “the washing away of a fundamental distinction between religion and secular worldviews enables us to ask more sensible questions about the functions of systems of belief.” “Try as he might,” Rennie concludes, “it seems he cannot effectively maintain a distinction between a religious and a non-religious worldview.” Although it is easy enough to ask people whether they consider their worldview religious or not, establishing a theoretical distinction between secular and religious worldviews requires scholars to stipulate a definition of religion. Smart sometimes stipulated a distinctive feature (i.e., contact with an invisible world), but then undercut himself, creating contradictions that he never resolved.

Well, it seems there is no family resemblance view from which an approach to the study of this or that only later emerges. Instead, the study of religion, e.g., would begin by 'looking' at everything that we call religion and 'seeing' (or not) what all is there.

If we are predisposed or otherwise prompted by the material to undertake listing some typical elements, then of course we may do so. But there is no reason to presume that the concept will prove to be a 'family resemblance' concept, and no reason to hope that it can be made to look like one, because not all concepts are of this kind.

If so, then Weitz, e.g., is on solid ground only insofar as the various Theories of Art are themselves look-and-see endeavors: only then can the divergences in these theories be reclaimed as descriptions. This I very much doubt, foremost because Weitz himself parses these efforts as 'criticism.' Looking at criticism is not looking at art, nor can looking at criticism be (responsibly) considered looking at (responsible) descriptions of art. Criticism, rather, is motivated description. Perhaps an aggregation of motivated descriptions is, in a spurious sense, more comprehensive than one or another could be; still, this is inadequate to shed any light on what 'art' is.

Look and see . . . but don't stare! That is impolite.

Now, does this amount to

misuse of the notion of family resemblance?

(Carroll, p. 226)

All that it is, I think, is a failure to prove (1) that a real definition cannot be found, and (2) that art is a family resemblance concept. Fittingly, all of that remains just a theory.

These are serious objections to Weitz, but they permit no suspicion that he "misuses" the term. (A Carroll-ish gloss on this: if Weitz misuses the term, then his argument is an argument about some other term(s); in which case our objections to the given term are as misdirected as is the given term itself.)


Now, presumably certain continuities and discontinuities may become impossible to ignore. Sports fans, e.g., might notice (as many have) that 'college football is a religion in the South'. The 'common properties' are obvious.

Meanwhile, on the other side of the ledger, plenty of Baptists, and even a few Pagans, can 'see' only the differences between college sports and religion. Perhaps these are small differences, as Freud said, but they too are real and obvious.

All of this is a hint that language can fail us in the most basic way, far more basic than philosophers see fit to concern themselves with. There is no law saying that people have to be sincere, intelligent, disinterested or sober while they are talking about football, or about religion.

It's more believable that furniture and chair reveal the structure of categories than that religion and football could. Standardly, the latter pair just are more tenuously linked than the former; they are more like penguin and bird than like sparrow and bird. Moreover, everyone refers to sports as . . . 'sports', and to religion as . . . 'religion'. Why? As Bambrough might have put it, because that is what they are. One is for Saturdays, the other is for Sundays. Some people are divalent, but the words are not.

To call sports a religion is to have a theory, to give an explanation. It is hypothetical. One concept is 'explained' in terms of another. I happen to buy this explanation in this case, but perhaps what is really needed is an über-concept under which both 'religion' and 'sports' fit (and into which, likely, will 'fit' a number of additional things). Some people may insist that a bean-bag chair is not furniture, but they cannot dispute that it is stuff in the living room.

Now we have ceased drawing arbitrary boundaries and commenced outflanking them.

Dare I say, the concept that would occupy this über-void happens to be precisely the concept that Rank converges upon in Art and Artist; but here too, I reach the end of the book (twice now) convinced of the argument but unable to satisfactorily name the concept. Why? How?

I only recently discovered that Francis Fukuyama names it perfectly and precisely, borrowing from the Greeks: it is Thymos, i.e., "the seat of pride, honor, and the desire to be recognized as a worthy individual". (Google) I think this is an excellent explanation for almost everything we feel the need to better understand about those phenomena, and doubly so about the people implicated therein. Just a theory!

What enabled Rank and Becker to uncover this? They looked hard at what people do, not what people say (or worse, what people say they do).

There are ... longstanding complaints regarding stipulative (2nd order) definitions of religion, chief among them that (1) they vary so much that we can’t compare what one scholar says about “religion” with what others say and (2) they tell us more about scholars’ views than about the views of people on the ground. ...

... in an article titled: “The Philosophy of Worldviews, that is, the Philosophy of Religion Transformed.” There [Smart] argued not only that “the philosophy of religion should be extended to be the philosophy of worldviews,” but also “that [the philosophy of worldviews] should be the upper story of a building which has as its middle floor the comparative and historical analysis of religions and ideologies, and as a ground floor the phenomenology not just of religious experience and action but of the symbolic life of man as a whole.” ... however, ... In starting with an idea of religion, however vaguely defined, and using it to analyze worldviews that he wanted to characterize as either religious and secular while remaining reluctant to identify what distinguished them, he remained suspended between religious studies and a more fully realized worldview studies. The alternative is to take worldviews that people on the ground characterize as religious, secular, spiritual (or whatever) as our object of study. Doing so requires us to define “worldviews” but not “religion.” ...

Solid outflanking here.


I do rather like the “No Theory” Theory and the proposed supersession of explanation by description.

What on earth is Danto 'looking' at, e.g., when he purports to 'see' an Artworld constituted by theory rather than vice versa?

these days one might not be aware he was on artistic terrain without an artistic theory to tell him so.

('The Artworld', in Margolis, p. 155)

What is Carroll looking at?

"How do you go about analysing concepts? ... there is substantial debate about what concepts are and how to analyse them. However, there is one very standard approach ... the method of necessary and sufficient conditions. ... Although this method is controversial, we shall presume its practicability ... , if only because it is a powerful tool for organizing and guiding research, even if ultimately it rests on certain questionable assumptions."
(7)

I may now ask: "research"? Or 'theory'? They are not the same. 'Description' requires no such "guiding" and the results can always be (re-)'organized' after the fact.

And the above "method"? This you call "organizing"? It's more like spraying buckshot.

Carroll is at it again in Ch. 1:

"what property or properties do Duchamp’s readymades possess that their indiscernible counterparts lack? Here, the neo-representationalist advances the tantalizing hypothesis that Duchamp’s readymades possess aboutness, whereas their ordinary, perceptually indiscernible, real-world counterparts do not.

  This argument is what is called an hypothesis to the best explanation—that is, neo-representationalism offers the best explanation of why we make the awesome categorical distinction between Duchamp’s readymades and ordinary real things ... ; and inasmuch as neo-representationalism is the best explanation, we have good inductive grounds for accepting it."
(27-8)

The effort is indeed "tantalizing" (read: heroic) precisely because it is an "hypothesis". But evidently we don't have to guess that there is such an "awesome categorical distinction" in the first place. Rather, Carroll takes it for granted that anyone can 'see' the distinction, even if they cannot literally 'see' the basis for it.

I find this odd in the context of an introductory text: at first glance it seems that he has assumed that which he intends to prove. But hand to heart, I too would say that readymades are artworks, I can't say exactly why, and I have no appetite for litigating art-ness in particular cases. So, for someone like me, Carroll has gone halfway towards abandoning "theory" and then turned back: he respects established categories to the point of taking them for granted, yet his larger project is to try to explain them (via "an hypothesis to the best explanation") rather than to 'describe' them.

Perish the thought, then, that this (or any) purported "categorical distinction" may indeed prove to be spurious. When Wittgenstein says that 'nothing is hidden', he assumes sincerity and disinterest, two things that can never be assumed in the Artworld. Consider the initial disbelief at celebrity wrongdoing: clearly we 'categorize' our culture-heroes as 'good people', though the only 'good' deeds we know for certain are those trivial, parasocial ones they've done for us ourselves. Much celebrity life is hidden, simply and deliberately. We cannot categorize information that we cannot access. All realism about categories and properties assumes a symmetry of information which is scarcely possible in many domains. Anywhere it is possible for agents to control information, category formation is affected. The un-philosophical but necessary addendum is: ' . . . and if they can, they will.'

Now, why should we consider "aboutness" to be a good explanation, let alone the "best"? We should first have to know what it purports to explain. In this case, we are trying to explain an "awesome categorical distinction" by way of a "property or properties". We do not have to guess at the readymade's properties! Properties can be described! Even if they cannot (literally) be seen! The problem, as Carroll seems to know, is that objects have infinite properties; hence not only can description never be complete, but it could be mind-numbingly extensive yet without naming a single art-relevant property. To this extent, we do, after all, have to know, already, what we are looking for before we start to look for it; that is to say, if it is art-properties we are interested in, we had better know what those are, and that is already to know what art is. So, per above, we can outflank the question by moving 'up a level' to some over-concept which would include our mark as an under-concept; this requires us to define the first but not the second. Hence Danto (and, presumably, Carroll-channeling-without-naming-Danto here): art is about something; how is it about what it is about? However, the problem with aboutness is that it is either intentionalist or subjective (or perhaps both at once) and therefore anarchical. The whole point of stipulating an over-concept here is to rein in the anarchy attaching to those concepts which seem to have 'nothing in common other than that they are called by the same name'. But aboutness instead invites enumeration of even more (infinitely more) that is not shared among the cohort.

How then to 'describe' In Advance of a Broken Arm without lapsing into theory? What is it, anyway? Do Wittgenstein et al mean this sort of question really to be: How is the work spoken of 'in the language'? That is not too difficult to get started with, since so much has been said: It is a readymade; a snow shovel; an artifact; an item of property; an attraction; a commodity, incl. a cash value; a phantom; it has one original, which is lost, and a few replicas, which exist. Most infamously for philosophical purposes, it is both an object and an idea. That problem alone has kept us busy for a century.

I have tried to shoot straight with these descriptions; but again, bias is not just psychic or ideological, it is above all circumstantial. Presumably there remain pockets of language users for whom some or all of this is not so. They may dispute 'artwork'. May they reasonably dispute 'readymade'? 'property'? 'idea'? Somebody somewhere surely will, and I'm not sure how to deal with that. Everyone gets a somewhat different view. This leads into such descriptions of a Duchamp readymade as: a provocation; a ruse; a declaration of (culture) war; a stroke of genius. I imagine that more of these sorts of things have been said than any number of objective-but-boring observations about, say, the width of the shovel blade. If so, it's because the latter kind of description doesn't allow us to do what we find most interesting. This is a bit like publication bias, where "what we find most interesting" is positive results rather than negative ones. The whiff of scandal tends to dominate. But the charge of 'ruse', for example, really is impossible to resolve by way of "description." It is itself already a description; what is really at issue is whether or not the thing it describes is real.

This case, therefore, is nothing like Wisdom's dog going 'round the cow.

People ask, 'If when a dog attacks her, a cow keeps her horns always towards him, so that she rotates as fast as he revolves, does he go round her?'
...

There is at once an inclination to answer, 'There is a sense in which he does and a sense in which he does not go round the cow'. But this is untrue. There are not in English two uses of 'go round' in one of which the answer is 'Yes' while in the other it is 'No'. Had there been, the question would hardly have produced difficulty. But the answer is not useless because it brings out how easily there might have been a use of language in which we should have had an answer ready, and thus hints that the question is a matter of language.
...

One should say: You speak as if you are asking a question about the dog and the cow. But you know the facts about them.
...

In asking me this question you are treating me like a judge of the High Court who is considering a question of law not of fact, ... Now I can of course give a decision if that is what you want. But you want more than that.

(Metaphysics and Verification, pp. 95-96)

In charges of 'ruse', or of 'genius', it is precisely the facts which are in dispute. 'Aboutness' too is an innuendo, only without the deprecatory valence of 'ruse' or the laudatory valence of 'genius'. All are 'theories', dressed up with nowhere to go, awaiting the arrival of some 'facts' to take them by the hand.


Edward Feser
Della Rocca on PSR

"A terrific writer", says one blurb in the subheading. My first instinct is to disbelieve such things, but in this case it turns out to be true.

The writer 'named the blog after himself'! See comments here for unsurpassed wisdom on this from Messrs. N.N. Taleb and P. Goodman. Fbfw I got hip a little too late.

Yes, I usually have trouble getting excited about philosophy that is quite this abstract, but maybe that's just because this kind of thing is rarely presented as well or as clearly as it is here. Bravo for that most of all.

The principle of sufficient reason (PSR),
in a typical Neo-Scholastic formulation,
states that

“there is a sufficient reason or adequate necessary objective explanation for the being of whatever is and for all attributes of any being”

(Bernard Wuellner, Dictionary of Scholastic Philosophy, p. 15).

...

Among the arguments for PSR I put forward ...
are
a retorsion argument
to the effect that

if PSR were false,
we could have no reason to trust the deliverances of our cognitive faculties,
including any grounds we might have for doubting or denying PSR;

and
an argument to the effect
that

a critic of PSR
cannot coherently accept even the scientific explanations he does accept,
unless he acknowledges that there are no brute facts and thus that PSR is true.

"In philosophy, retorsion (from Latin retorquere, to twist back) is a method of argument, akin to a reductio ad absurdum, that turns an opponent's claim back against them by showing it leads to a self-contradiction, meaning the claim's own truth makes it impossible to state or hold consistently. It's used to refute positions like radical skepticism or relativism by demonstrating that anyone adopting such a stance (e.g., "there is no truth") would be unable to logically assert that very statement, ..."

(Google AI Overview, 18 Dec 2025. Query: 'what is retorsion in philosophy?')

... An explicability argument [hereafter "EA"] ... is an argument
to the effect that

we have grounds for denying
that a certain state of affairs obtains
if
it would be inexplicable
or a “brute fact.”

Della Rocca offers a number of examples ...

When philosophers employ inductive reasoning
they are essentially
rejecting the claim
that
the future will not be relevantly like the past
nor
the unobserved like the observed,
on the grounds that
this would make
future and otherwise unobserved phenomena
inexplicable.

...



... suppose we apply the EA approach
to the question of why things exist.

Whatever we end up thinking the correct answer to this question is ... if we deploy an EA in defense of it
we will
implicitly
be committing ourselves to PSR, ...
because PSR
just is
the claim that the existence of anything
must have an explanation.

...




... The implication seems to be
that
we can have no good reason
to think anything is explicable
unless
we also admit
that everything is.

...

Even if the critic of PSR decides to reject the various specific examples of EAs cited by Della Rocca ... the critic will still make use
of
various patterns of reasoning he considers formally valid or inductively strong,
will
reject patterns of reasoning he considers fallacious,
etc.

And
he will do so precisely because
these principles of logic embody standards of intelligibility or explanatory adequacy.



To be sure,
it is a commonplace in logic that not all explanations are arguments,
and
it is also sometimes claimed (less plausibly, I think) that not all arguments are explanations.
However,
certainly many arguments
are
explanations.
...

Arguments to the best explanation are explanations, and
as Della Rocca notes,
inductive reasoning in general
seems to presuppose
that things have explanations.

So,
whenever Carroll skews inductive,
we can presuppose
him to presuppose
that there is an explanation for, e.g.,
the awesome categorical distinction
between a readymade and a shovel;
that the fact of distinction
is not a brute one.

But,
Wittgenstein says,

All explanation must disappear,
and description alone must take its place.

Philosophical Investigations §109

"Wittgenstein" appears only in the comments to this post. Allegedly there are 595 of them. I have yet to Load More. . . and there are 57 hits for "Wittgenstein" on the page.

...




the notion that the PSR is a relic, long ago refuted, is a mere prejudice that a certain kind of academic philosopher stubbornly refuses to examine.
...

He “already knows” there must be something wrong with it, because, after all, don’t most members of “the profession” think so?


...



COMMENTS

Robot October 11, 2014 at 8:56 AM

...

Could someone spell out why an atheist supposedly would have an issue with psr






Tyrrell McAllister October 11, 2014 at 9:41 AM

@Robot

Feser makes the argument here:
http://edwardfeser.blogspot.com/2011/02/can-we-make-sense-of-world.html

"[I]t is very likely that an atheist has to hold that the operation of at least the fundamental laws that govern the universe is an 'unintelligible brute fact'; ...

The reason ... (arguably) ... is that to allow that the world is not ultimately a brute fact

-- that it is intelligible through and through --

seems to entail that there is some level of reality which is radically non-contingent or necessary in an absolute sense.

And that would in turn be to allow ... that there is something which ... is pure actuality and ipsum esse subsistens or 'subsistent being itself'

-- and thus something which has the divine attributes which inexorably flow from being pure actuality and ipsum esse subsistens.

Hence it would be to give up atheism."

"Ipsum esse subsistens ... means "subsistent Being itself" or "existence itself subsisting," describing God as Pure Being, where essence and existence are identical, unlike creatures whose essence (what they are) is distinct from their existence (that they are)."

(Google AI Overview, 18 Dec 2025)

I'm almost convinced. But I don't understand what is inexorably divine about pure actuality.

What is necessarily remarkable about this "pure actuality"? That it is "divine"? Is that just what those two terms mean? They mean each other? Probably not.

Why shouldn't "pure actuality" instead be the least remarkable, most mundane concept we can imagine?


Lessons in Psychology: Freedom, Liberation, and Reaction
Through-lines: What Makes Something Descriptive Psychology?

A through-line description is, paradigmatically, the description of a non-contiguous sequence of a person's courses of action as having a shared significance. For me, that's sufficient formulation.   Greg Colvin

Through-lines serve as a manageable unit for identifying significant and recurring themes in a life history. A through-line is a stable feature of personality.

I am proposing that the through-line concept has a useful place within the Dramaturgical Model of Descriptive Psychology. This claim has created controversy and goes to the heart of the question of what makes something authentically Descriptive Psychology.

Concepts, Practices and Subject Matters

What gives a subject matter its integrity? How do people recognize that they are working in a common intellectual endeavor? ...



The philosopher Stephen Toulmin suggested that to understand what actually counts in a science, look at what the various communities of scientists do rather than focus on their theories.

What people do is a matter of the conceptual distinctions they have available to guide their actions. Concepts define the actions of the community that uses them.

This implies that a subject matter is defined primarily by the policies that indicate when and what concepts are employed. An intellectual community's policies and practices point to what is significant and essential to their subject matter.

Yes.

In a combined fit of rage and moment of clarity, I once wrote:

"As a layperson, blissfully free of "constructivist" rigor and refinement, one can always make what one will of science, as of statistics, as of art. The trick, though, in each case, is for communities as communities to make the right thing of it.

"To learn to do this (or, channeling Feynman, to learn what one must learn in order to do this), the really urgent question is: How is any given piece of science to be
politically operationalized? Laypeople cannot hope to wade into any methodological briar patches, and thankfully, once this point about operationalization is understood, they do not need to. The political operationalization of science is something anyone can understand. Provided one can catch a glimpse at the operationalization, it is superfluous to examine the methodology, which will soon be changing anyway."

At the 2000 meeting of The Society for Descriptive Psychology, Ossorio said:

“The Dramaturgical pattern is based on the model of a social practice. It’s an episode. And that’s what human life consists of, this kind of episode. The discussion of self-concept depended on that, that you live your life not just engaged in this Deliberate Action followed by another one followed by another one. What you’re living is meaningful patterns of Deliberate Actions. And the closest approximation we have is a social practice. So that’s what I mean by a dramaturgical pattern. You have to have that kind of history, not just a history of Deliberate Action.”

Whatever works, works; but I think there's an obvious flaw in all such analogies, and that it's a flaw that actually matters.

Clearly life is not just 'one thing after another.' It is not that from the perspective of the one living it, certainly; and seldom is it that (I hesitate to say it never is) from the perspective of any onlookers.

The problem with metaphors of drama, of theater, or even of 'story' writ large, is that these things too are quite a bit more than meaningful patterns of Deliberate Actions. What I specifically object to is this use of "pattern." It should then be simple to replace that word with "reoccurrence" or "recapitulation" and move on; but it seems that "pattern" is exactly what is intended here: it seems that the suggestion of geometric regularity, linearity, extrapolatability (what else to call it?), predictability, etc., is part and parcel of statements such as the above.

I rather prefer the first version above: the description of a non-contiguous sequence of a person's courses of action as having a shared significance. I don't know if it is the chronology or the action (or both) that is here said to be non-contiguous, but this seems, either way, to begin to remedy the bruteness of "pattern".

To me, "pattern" in the narrative arts means The Three Little Pigs and The Twelve Days of Christmas. I would call these stories in pattern form, and I would differentiate that sharply from stories that contain patterns. I'm not saying we can't pick out so-called "patterns" in Hamlet, rather that the narrative of Hamlet is not constituted by "patterns". I would think it demonstrable that (most of) what we call 'narrative' cannot emerge at all from (most of) what we call 'patterns'. I haven't thought much about this. I only think about it when I read something like the above, and then I always know exactly what I think.

Ironically, I think this resolves itself by keeping the word "pattern". The word we get rid of is 'story' or 'narrative', or here Dramaturgical. (Do people think it'll be harder to 'get rid of' a word the larger it is?) Here as always, the fear seems to be that the story-concept is the glue that holds everything together; without it, we risk suggesting (not to say actually believing) that life, real or fictional, has all the 'dramatic' unity of a grocery receipt.

I would counter that, as psychologists (professional or, in my case, armchair), any pattern-seeking we do is itself a profoundly anti-narrative maneuver. I would distinguish between linearity and contiguity. Narrative is contiguous. We call narrative 'nonlinear' when the constitution of this contiguity is broken; perhaps this is a coincidence, but I find it meaningful. Nonlinearity is formal; but there can be no 'formal discontiguity' in narrative. So I would argue.

Of course I mean no disrespect here to Ossorio, whom I've never heard of before but have now added to the ever-sprawling To-Read pile. His Wikipedia entry suggests plenty of common ground.

When Ossorio was introducing his use of dramaturgical he was clear that he didn’t need to reinvent the wheel nor did he need to take on all of the baggage that attends dramaturgical in other contexts. And drama, he pointed out, is a conceptual option, more inclusive than “narrative” but, at times, less intuitively on point than “game”.

Hmm. What is more inclusive about it? 'Narrative' is the more general term, of which 'drama' is a subspecies. Maybe "inclusive" means: it includes more specifics.

Anyway, I'm saying that what is needed is more exclusive terms than either of these. Since we will be 'excluding contiguity' from our 'descriptions', it needs also to be excluded from our guiding metaphor(s).


soul searching or just looking for fights
A Revival of Interest in Cooley

Now charging hard into the double-digit pages of search results, an offshoot browser tab found its way here, to passing mention of Kenneth Burke's 'dramatism' or 'dramaturgical school' of 'symbolic interactionism', and to all manner of related and semi-related passing mentions.

Burke's 'Dramatism', the 'rhetorical theory', has its own Wikipedia page.

Burke is mentioned more-than-passingly by Becker, but in connection with 'victimage' rather than 'drama'. Becker of course is himself a leading exponent of The Drama and presumably borrowed it from several sources, Cooley not least among them. If he was a Jungian Mystic rather than a Rankian Existentialist, this would pose more problems for me than it in fact does. Rank is a theorist of narrative linearity, not of narrative contiguity, and, well . . . "although Rank's thought is difficult, it is always right on the central problems, Jung's is not." Jung is riding shotgun in Back to the Future, Rank is not . . .


THE GRØNMARK BLOG
Andre Agassi’s co-written autobiography, “Open”, is a work of comic genius

This is the portion of the program where we discover odd hits for "Wittgenstein" on pages devoted to mainstream celebrities.

I am both a sports fan and a book reader, and I miss no opportunity to signal the two together. Unfortunately I cannot tolerate either in mainline doses, so while I would surely devour this Agassi book, I almost certainly won't.


Museum 2.0
But What About Quality?

Scene: a regional workshop on arts engagement. A funder is speaking with conviction about the fact that her foundation is focusing their arts grantmaking strategy on engagement. Engaging new people. Engaging more diverse people. Engaging people actively in the arts. Any questions?

One, from a museum director. The question that comes up every time, the question so big it deserves the impropriety of all caps: BUT WHAT ABOUT QUALITY?

No one wants to do crappy work. Everyone wants quality, in one way or another.

The word "quality" is often code for aesthetic quality, as judged by a specific set of cultural expectations and preferences.

...

From here, art-critters can mentally complete the post (and the comments) from memory and habit, if they wish . . . and they probably wish not.


Here I want to suggest that we are staring directly at one of quality's uses in the language: When a bureaucrat says "engagement" or "inclusion" or "community" or "growth", everyone else says: BUT WHAT ABOUT QUALITY?

From there, the bureaucrat's only viable move is to call non sequitur on this 'use', as if the suggestion that "engagement" has been put directly in place of "quality" is an incomprehensible bolt from the blue. But everyone's verbal behavior here is consistent with (not to say it proves) perfect mutual understanding; and I presume that to 'look at the use' means looking at the hearer too, though they may not say the word in their response, or they may not respond at all.

It is stretching the meaning-use thesis beyond its limits to suggest that the funder is 'using' a word she never said. It is, in fact, a libel on that person to allege this. But if she reports that the same big question comes up every time, who is libeling whom?

Enter the stipulative (2nd order) definitions.

Exeunt.


... if they took porn off the internet, there'd only be one website left, and it'd be called: Bring Back The Porn; ...

This is how we know that "funders" do care about quality (sort of), and that they do have quality in mind (sort of) even when it is not among their immediate utterances. The path to engagement is no mystery. Engagement was demystified long before there were any art museums at all. This archetypal funder's rage for engagement is not being held back by imagination nor by technique, but rather by scruple. That is a point in her favor.

What the scenario also shows, however, is that this is the flimsiest kind of scruple, the kind that can be bought off for a price. Quality does matter. Sort of. But her foundation is focusing their arts grantmaking strategy on engagement. This should read: " . . . on maximizing engagement without falling below our minimum standard of quality . . . " Or perhaps we can't be so sure we know what exactly comes after the wishful "without", aside from that it is a wish. But that "without" clause must be there, and some value-orientation must occupy it. Otherwise, engagement is easy, with or without focus and strategy. Scrabble players seeking to 'grow the game' know that if they held Chess tournaments instead they would get more "engagement." These boosterists cry apathy and mismanagement by prior regimes, but really their stagnation stems from their best quality, which is that they have a scruple preventing them from cutting off their nose to spite their face.

It can never be certainly inferred just what someone's 'minimum standard' might be; if you want to know in any given case, you'll just have to schlep out there to see for yourself. What can be said? The funder's operational concept of quality is a quality-minimum and is not a quality-maximum. That's not something that has to be accepted or respected. I think it's actually the clearest indication that an art institution has become a tyranny of the weak. Not every uncompromising person is strong, but strong people do demand the best, and they demand it for their entire community, not just for themselves. (Becker: "the hunter who killed the game distributed it with pride and often took the least desirable part of the animal for himself.") The 'tyranny' of the institutional art world is born in confronting a 'community' that is not a community at all but rather a market where art competes with literally every other way a human being unshackled from the bonds of necessity might choose to pass the time; yes, including the most infamous of ways as well as the most exalted.


BeerBrarian
The (Second) Draft Framework for Information Literacy for Higher Education: My Thoughts

The Association of College and Research Libraries (ACRL) has released a revised draft of their Framework for Information Literacy for Higher Education. ...

What I like:

...

What I don't

...

III. Threshold Concepts (TCs). ... Lane Wilkinson has an excellent critique ... He argues, convincingly, that

  • TCs are based off of probable characteristics within disciplines, but probable is not the same as defining.
  • The authors of the Framework assume that students will be transformed and troubled by similar concepts in similar ways, but students are a diverse bunch.
  • Knowledge of concepts does not imply ability(s).
  • Disciplines are contested spaces, whereas TCs seek to canonize.

Given these critiques, we could attempt to improve TCs by saying that they are like a family resemblance, per Wittgenstein. In this formulation a series of overlapping similarities could make up a group of threshold concepts for a discipline, but creating boundaries might prove difficult, ... Or we could talk about a Lakatosian "hard core" for each discipline ...

What if instead of threshold concepts, we used learning outcomes? For example, "An information literate learner should be able to...." or "A metaliterate learner..."? Learning outcomes are less flexible, and as the authors note in their FAQ, less focused on process, but

  • there may be many roads to information literacy, some of which are under-explored and -theorized at present, and
  • if librarians, faculty, and other members of our communities can't agree on what a metaliterate or information literate learner looks like, then we need more robust definitions of those concepts.

Seems to me that 'should be able to' will almost always be the best evidence of learning!

I've never heard of Threshold Concepts before. Following the bouncing ball, here is an accessible account.

"Broadly, a threshold concept is described as ‘akin to a portal, opening up a new and previously inaccessible way of thinking about something’, in alleged contradistinction to a ‘core concept’ ... More specifically, Meyer and Land associate five properties with threshold concepts: transformative, irreversible, integrative, bounded, and troublesome. But they only say that ‘[a] threshold concept . . . is likely to be’ each, and offer further qualification for some items ... "

Easy enough to understand. Fromm, Rank, Becker, Lasch, Taleb, McLuhan and Skinner have all but assaulted me with these 'concepts' over the last decade or so. Maybe Wittgenstein's "no-theory theory" is next! (I don't think "family resemblance" will be nearly this transformative for me. It seems like it was a mere preliminary which just so happened to inspire more than its fair share of both adoption and resistance?)

Anyway, these threshold-skeptical writers point out that students are a diverse bunch, which (I assume) is to say that what is "transformative" or "previously inaccessible" to one may not be so to another. Seems basic! I wish we didn't have to choose between Woke diversity ideology on one hand and an obliviousness to even the purely circumstantial kind of diversity (i.e. "access") on the other. All concerned here surely have experienced the 'blind spot' phenomenon, where you somehow manage to avoid encountering some small morsel that is basic to your field, and upon becoming aware of it simply cannot believe that it eluded you for so long. Here's one I was just thinking about. I always just called this a 'suspension'. Either I never said that out loud or no one else wanted to say 'retardation' out loud. This is much too small a 'concept' to be "transformative". I mean only to suggest that two centuries of obscene intellectual overproduction have made true experthood all but impossible: there's too much stuff and it's too easy to just miss something. Now that's diversity under the skin!

As for disciplines being contested spaces, obviously this is highly problematic for the monolithic "threshold concept" theory as given above, but I'm not sure what the real problem is. The Wilkinson post asks,

"what would the threshold concepts in psychology be? Would they come from psychoanalysis? Behaviorism? Cognitivism? Humanism?"

Point taken. But I think this shows, also, that if a student shows up to the library asking for help researching "psychology", probably the first thing they need help with is being more specific.


BLS LLB
bls llb - logic (sem -1)

A whole logic textbook!

Just in time for Christmas!

😘


Lucky page 13 brings two music-related posts, one after the other in the search results.

- Melosías , el blog de Josu De Solaun -
... the hairpin debate - the score as map, not territory, nor law: prescriptive versus descriptive notation ...

Ask A Korean!
Once Again: K-pop is Not a Genre

I skimmed these, and I recommend at most that level of engagement. At least I learned where the whole 'Is a hotdog a sandwich?' thing got started, and I also learned that California has enshrined the affirmative answer in law.

These days I'd be waaay more into exploring 'textualism' in law, where it actually seems to matter, than in music notation or performance, where it really doesn't. (But do look for some posts on Kivy's Authenticities sometime in the distant future. I recall he did make a few points that I found novel.)


the Brindle Brothers
Why 'Videogames'?

... every so often someone suggests that we should use another word. The idea is that the word fails to adequately reflect a changing medium and needs to be replaced by one that can handle the job. ... But I use 'videogames', and here's why.

... We have not stopped calling novels ‘novels’ ... It’s implausible, I think, to suggest that the substance of the term ‘causes’ any problems in the sense of distorting our view of the object, simply because it is so well-worn a word as to be divorced from its origin. The novel by any other name would be as dead ... In short, while the word ‘novel’ contains a treasure trove of history, that treasure is buried ... ; it only re-emerges when we ask for it.

Here, finally, is the peculiar obverse of stipulative definition as a power play: stipulative re-definition as a power play.

Is that where 'Sound Art' came from?

Mustn't the Sound Art people have been running from their professional peers rather than from uninitiated audiences? The uninitiated tend to agree that such things are 'not even music', hence enshrining that alleged fact in a name seems superfluous.

But for me to argue this would be somewhat dishonest, because there are concrete reasons why I like the term ‘videogame’ beyond it merely being the one we’re lumped with for better or worse. Unlike the year-old teabag of ‘novel’, it’s still fresh enough to have flavour when you sniff it. Most clearly, it reeks of juvenility.

Good. Accepting ‘videogame’ with our whole hearts precludes being ashamed of our medium. It is populist and demotic ... It sounds like a word by 8-year-olds for 8-year-olds. And as critics we must banish the idea that only those of po-faced seriousness are worth our time. We should make a virtue of trashiness, embrace the garish, valorise the vulgar, fuck the haters. Clearly, videogames are about instructing computers to hallucinate vast mazes of desire which channel the human will to knowledge through strange and beautiful paths ... – but, equally clearly, they are also about travelling through time and capturing monkeys in a big net.

Make a virtue of trashiness. Isn't that kind of like making a new word for something that already has a name?

Moreover, ‘videogame’ reminds us that videogames are distinct from sports or board games – and that the ‘video’ part is crucial. As trendy and analytically useful as it is to place videogames in context of other games throughout history, there are three big reasons to stress their differences.

  • Firstly, videogames are automated. A computer moves the pieces, a computer rolls the dice, and a computer enforces the rules, which makes a huge psychological and epistemological difference. Put simply, a videogame is an Other whose computer-generated rules assume an autonomous reality independent of the player. Indeed, videogames (correct me if I’m wrong) are unique in that they do not allow the player access to their actual rules. In other games, rules are explicitly known by players, articulated in a common language, and provided by a rule book ... Videogame rules are articulated in conlangs designed for machine interpretation and hidden from everyone except specialists. As in spacetime, their existence must be induced by observation.

Constructed Language

  • Secondly, videogames are complicated. The automation described above allows them to enact far more complicated rules than any other kind of game (bar perhaps the physics which are built into physical sports). There is a limit to the complexity of rules that a book, board or P&P system can incorporate, but computer power raises this limit by orders of magnitude, giving rise to entirely different kinds of game: minute simulations of physical processes, emergent playgrounds, autonomous AI characters. Arguably, in comparison to board games, this has actually led to lazy design. But the difference remains – a quantitative one big enough to actually become qualitative.

The lazy design theory is pretty interesting. But players are "lazy" too; that's why the purported complicatedness of videogames also is arguable. Complexity and simplicity can always be relativized away as so many trees falling in the forest with no one to hear them. Many high school musicians, e.g., when given a piece of 'slow' music consisting mostly of half and quarter notes, will pronounce the piece 'easy' upon first glance; some will continue to think so even after proving how difficult (for them) the piece actually is. Are they absolutely wrong to think so? It seems more that they (1) have different (lower) standards of performance than their teacher does, (2) can't tell the difference (or think they can't) between a 'student'- and 'professional'-level rendition, and/or (3) confuse notational and conceptual simplicity with ease (for them) of realization. Most can readily be made aware of the hidden (to them) complexity; some would never have noticed it otherwise; and a few might seem incapable of being made to notice. Probably this is just masculine 'character armor' rather than real insensitivity or stupidity, but it's often hard to tell. Those philosophers who fancy themselves Aesthetic Realists certainly have to hope that these are basic breakdowns in communication, comprehension or standpoint. I don't see how that could be the case. Maybe there are some 'uninitiated listeners' in the beginning band, but not in the high school band: the band is the initiation.

  • Thirdly, videogames are visual (except when they’re not). As I said, most of their rules are accessed through representational media, and at the moment that’s mostly the same equipment and the same methods as film and TV. This means that art direction, animation, camerawork, colour, acting and so on are all at least relevant to the player’s experience (and condition her experience of the rules). Videogames are thus a hybrid form, and the relationship of videogames to traditional games is very broadly analogous to that of cinema to theatre: the former subsumes the methods of the latter while excluding some of its other parts.

Those who compare videogames with Chess and Go are right to do so, but – just as a matter of correlation – tend to ignore how many videogames, and how few games of any other kind, are single player. Chess isn’t much different when it’s on a computer, but Braid and Half-Life would be very different if they weren’t.

The question does arise of what we would call a game for the blind or a power plant simulator which was ‘played’ with a full-scale fake control room. The same machinery of simulation and process are involved in these examples, but the screen is absent. That is to say that points 1 and 2 still apply, but that point 3 only sort of does, in that the medium of access conditions the experience of play. To be clear, I really don’t care about establishing a precise definition of videogame and sorting every object in the universe into one box or another. My concern is that advocating the term ‘videogame’ is exclusionary if there are a bunch of games which the term doesn’t fit. As it is, non-visual videogames produced specifically for consumption for the blind are remarkable mainly for their rarity. If they get common enough to establish their own genre (and I hope they do), I doubt anyone would object to ‘audiogames’.

So that’s the video part sorted. But I should pause at least briefly to justify the component of ‘game’. Earlier this year, Raph Koster wrote about the then-latest evolution of the great neo-ludology vs space-narratology debate: ‘x is not a game’. There are a good number of critics – Keith Burgun, for instance, or Tadhg Kelly (see also here) – who argue that ‘game’ has a specific definition not fulfilled by some things we call ‘videogames’. Their definitions very much resemble what critic Jesper Juul, in his book Half-Real, calls the classic game model. Juul goes through various definitions, from Johan Huizinga to Chris Crawford, and tries to synthesise a definition which retains everything they have in common.

But, he says, “the classic game model is no longer all there is to games”; it’s “a snapshot of a specific way of creating 'games', a model that can be traced historically for thousands of years.” Identifying this model as existing within the wider category of things called 'games' is far less presumptuous than trying to take possession of everything signified by that word. In vernacular speech, ‘game’ is used promiscuously to describe everything from Tetris to children’s make-believe, and even metaphorically to make distinctions of importance (“it’s all just a game to you!”). Other people aren't necessary either; we call it a game when we only step on certain paving stones or play keepie-ups or Solitaire. So unless we really are talking about the classic game model, a definition based on Wittgenstein’s famous family resemblance is more appropriate. If you want to describe a stringent and careful and specific category, use a stringent and careful and specific term. ‘Game’ is not that term.

keepie-ups??

One thing is left to clear up. The actual topic of my conversation with Williamson was on the issue of whether to use ‘videogame’ or ‘video game’. While this is a seemingly unimportant matter of style, I think the answer is clear. Videogame is faster. It’s cooler. It pops, pops, POPS. It’s also more futuristic, in a deeply kitsch way; if you were writing an 80s sci-fi novel and had the choice to write either ‘cyber thing’ or ‘cyberthing’, which would you choose? I rest my case.

Actually, I don’t, because the difference of a space may actually be quite important. In academia, it makes sense to say ‘digital games’ when you need to clarify exactly what you mean and what you don’t. It’s a term intended to be as neutral as possible – or rather, as deliberate, in the sense that it tries to exclude any accidental or irrelevant connotations. Digital games are ‘games’ which are ‘digital’, and as long as you can define those component parts you can define a digital game. This all makes sense. Academia depends on a process of distinguishing, dividing, sifting, clarifying; its whole method is based on divorcing yourself from reality and common sense because it recognises that both of those terms are deceptive. It requires a kind of strategic Platonism whereby you employ abstract categories ‘above’ the conventional ones we see and touch because they better explain an unconventional truth.

Fair enough.

Why do academics require abstract categories 'above' the conventional ones? It's because of what they're trying to do.

Where else do we find people 'trying to do' something which requires this strategic platonism of them? This seems to be the domain of institutions generally. But why? Scale must have something to do with it: the more people over greater distances, the heavier is the Platonic hand needed to coordinate them. Standardized time is an infamous example.

(In fact, the term seems plenty problematic: is a sports game creatively adapted to computer more ‘digital’ than a board game merely implemented on computer (the distinction is again Juul’s), and if so, which is digital enough for the definition? Does the definition really ‘mean’ to include games like Bop It, which mediates computer power through trad object play? It’s very possible these ambiguities have been resolved and I don’t know about it.)

‘Videogame’, by contrast, does not even try to be neutral. And while I’ve already indicated that I like what it connotes, there’s something important about its very failure. It’s messy. It sticks two words clumsily together to reflect a hybrid, amalgamated, sticky field. It represents the form in all the different ways it has to: a cultural artefact as well as a commercial category, a type of memory as well as a type of software. That is to say it indicates an object with substance and specific characteristics, but ones which, because it fails, tetragrammatonically are what they are, and which, as long as it gestures towards an evolving field, will be as hard to control as language itself. It is a word whose paradoxical specificity opens possibilities.

What are those contradictory characteristics? They are of childish and childlike things which had in them the seed of all maturity and which mean old things to older people. They are of trashy, vulgar and profound truths, idle diversions of the highest importance, of recreation that isn’t fun, of contests you can’t win and losses which are really victories (not to mention of having your cake and eating it). They are of futures which have existed and futures which might exist and futures which will never exist, and of a futuristic medium named after a dead movie tape format whose jarring anachronism inexorably madeleines to mind the galaxy of lurid colours and the Conwayesque dance of memory and code remembered and forgotten by Rob Beschizza’s Nomen Ludi – which is, of course, a pun on the title of an Umberto Eco novel about language in history, and Latin, of course, for ‘the name of the game(s)’.

So there.

13 comments:







CdrJameson 20 November 2012 at 03:42

I was reading Grant Morrison's Supergods ... he used the word 'VDU' ...

I might go for VDUgames.

And you're wrong that they're visual. They've just been captured by artists over time, demanding primacy because they're 'video' games. So we get more and more game-worlds that demand appreciation of their tiny, non-interactive, irrelevant details and fixed geometry; Videogame design courses are branched from art departments; Tools like Flash & Unity, that are basically graphics editors with some small, grudging concession to interaction.

Or something.

John Brindle 20 November 2012 at 09:31

Which VDUgames are not visual? Are we talking about Pac-Man here? Or Bard's Tale? Or what? Was the INCREDIBLE visual Doom evidence of 'artist capture'?

Obviously, videogames aren't purely or primarily visual, but I'm trying to work out what games looked like before they were "captured by artists". Perhaps you're saying that in a previous, visually unsophisticated era, game visuals were 'only' there to represent the rules and did nothing of their own that affected the experience. While there are games in which the visuals are more or less important, I don't think that was ever true.

The significant exception I can think of is interactive fiction. But this form, which has its own name, culture, canon and community slightly separated from (though overlapping with) that of 'videogames', only strengthens my argument. Engaging with IF requires skills of traditional literacy and critiquing it requires trad stylistic prose analysis (as opposed to visual literacy), because it is a form which mixes ludic sets with literature (as opposed to the moving image).

I don't personally have much interest in analysing the visuals of videogames. I have none whatsoever in analysing them as divorced from rules. But it would be ludicrous to propose they don't condition the player's experience and the meaning of the 'text'. Likewise to imagine good aesthetics and good mechanics are mutually exclusive.

CdrJameson 21 November 2012 at 03:01

Well, I'm not saying that they're invisible.

For me, videogames are bright colours, chunky squares, clean vector lines, but the emphasis on graphics has over time taken more and more of development budget, influence and attention.

Adventure games would be a prime example of artist-capture (possibly from writer-capture) where the emphasis shifted from narrative to graphics. Now adventure games are fully 3D rendered (eg. LA Noire).

Games development is more and more about the graphics. Unity for example invites you to create a beautifully detailed, hand-crafted but largely static model baked into place for the player to run around. But it's just the background, it's not important to the game.

Guitar Hero & Dance Dance Dance Dance Dance Dance all have whizzy special effects and virtual stadia going on that you can't even see when you're playing - your attention is all on the notes coming down the track. Same with Quicktime events - who cares what's happening in the background? I have to concentrate on which arbitrary button sequence is coming up.

Modern projects have 2-3x as many artists as programmers or designers (10-20x the number of writers) and mostly providing marginal player benefit.

Good aesthetics and good mechanics do go together, but the balance has been skewing in terms of the aesthetics for years.

videogames aren't inherently a visual medium*, they're an interactive medium that happens to have visuals.

That's why there's a version of Doom with no graphics that still works.

*Any more than sports or board games. They both have visuals, but they are not about the visuals.

Hmm. Can we say what something is about? This does seem to suggest that good aesthetics and good mechanics are mutually exclusive: the work is "about" whichever manages to subordinate the other.

On the other hand, if Mumford was right that the aesthetic impulse runs roughshod when not appropriately constrained, then games being 'really about mechanics' is actually the perfect nonconstraint upon 'aesthetics', and it would then be perfectly understandable that an aesthetically hypertrophied form would (still) be "about" mechanics the whole time.

John Brindle 21 November 2012 at 05:18

...

The difference is precisely of necessity. The rules of a sport or a board game exist in natural language and are communicated to players as a precondition of play. The players are responsible for enforcing them; they have help from a ref but by and large they self-limit, ie by not just picking up the ball and running with it in football. In videogames, however, players have NO direct access to the rules, which MUST be mediated through some form of representation.

That representation doesn’t have to be visual – hypothetically, it could be audio or even haptic – and, of course, in videogames it usually is these things in addition to video – but in the medium we are talking about the primary form of mediation is usually the moving image. It is the difference between visuals as a component of the rules (eg team colours, coloured tokens) and visuals as the means of accessing rules which exist in a black box.

I’m not saying that all visual changes make any kind of difference. Or that superfluous and pointless visuals aren’t possible. Or that AAA games AREN’T becoming way too focused on that side of things. I just don’t think noting that mediation is a fundamental component of automated games (ie ‘video_games’) in a way that isn't true for sport or board games necessarily functions as taking the other ‘side’ in that conflict. Especially in the context of what I write here - you'll notice I don't spend a huge amount of time waxing lyrical about art direction. If an unjust degree of "attention" is being sucked towards graphics, mine is not among it.

Here's another example from Jesper Juul. We're going to play a game where each of us picks a number between 1 and 9 until one of us has three numbers that can add up to 15. Each number can only be picked once. So:

1) You pick 5.
2) I pick 9.
3) You pick 2.
4) I'm forced to pick 8 (otherwise you could get 8+2+5)
5) You pick 7, thereby threatening to win by picking 3 (3+7+5) or 6 (2+6+7)
6) I lose because my first pick was a mistake.

Actually, this game is formally equivalent to tic-tac-toe. If you take numbers 1-9 and arrange them on a 3x3 grid so that each three-number straight line adds up to 15, you can play both games at the same time. But while the games are ludically identical they are clearly experienced quite differently, and indeed we might actually say the player verbs are different, because the skills and cognitive strategies required certainly are. This is ‘different experiences’ not as a waffly canard but as actually reflecting different characters of play.
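(The equivalence is easy enough to check mechanically. Below is a minimal sketch in Python, mine rather than Juul's or Brindle's, assuming nothing beyond the rules as quoted: the eight three-number combinations from 1 to 9 that sum to 15 are exactly the eight lines of a 3x3 magic square, so a winning triple in the number game is a winning line in tic-tac-toe, and vice versa.)

    # Minimal sketch (mine, not from the post): the "three numbers summing to 15"
    # game is tic-tac-toe in disguise, via the 3x3 magic square.
    from itertools import combinations

    MAGIC = [[2, 7, 6],
             [9, 5, 1],
             [4, 3, 8]]   # every row, column and diagonal sums to 15

    # The eight tic-tac-toe lines, expressed as the numbers sitting on those cells.
    LINES = (
        [frozenset(row) for row in MAGIC] +                     # rows
        [frozenset(col) for col in zip(*MAGIC)] +               # columns
        [frozenset(MAGIC[i][i] for i in range(3)),              # main diagonal
         frozenset(MAGIC[i][2 - i] for i in range(3))]          # anti-diagonal
    )

    # Every three-element subset of 1..9 that sums to 15 ...
    triples_15 = {frozenset(c) for c in combinations(range(1, 10), 3) if sum(c) == 15}

    # ... is exactly the set of lines on the magic square: the two games coincide.
    assert triples_15 == set(LINES)

    # Replay the quoted moves to see where each pick lands on the board.
    for n in (5, 9, 2, 8, 7):
        r, c = next((i, j) for i in range(3) for j in range(3) if MAGIC[i][j] == n)
        print(f"pick {n} -> row {r}, column {c}")

Run it and the assert passes silently, which is rather the point: ludically identical rules, experienced quite differently depending on how they are mediated.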

This also means that designers control access to the rules. They can choose what the player knows about how the game works and choose to deliberately obscure or highlight different parts of it using visuals. In the first chapter of Braid’s World 4, there are two identical-looking locks, walling in a puzzle piece. One opens as expected but the other will break your key and actually force you to quit and restart. They are very different rule entities, so the confusion only exists in a purely aesthetic realm – but it serves to disrupt the player’s capacity to decide.

Replies

Mark 5 February 2013 at 05:21

...

I agree with much of this, but I'm not 100% convinced of the hard distinction between rules in videogames vs. sports. I think there are external-to-the-player "rules" in sport, so to speak, or at least there are external material facts and procedures and dynamics that influence gameplay, as they do with computers. In a videogame, the "material" is programmed: the player doesn't necessarily know how SimCity buildings spring up, and the general dynamics of the world, and the player also doesn't enact those dynamics consciously through agreed-on rules. But in sports, there is some of that as well: the player does not necessarily have direct access to the material properties of a turf, how cleats and balls will interact with it, etc. Those are also external to the player and not consciously enforced by him/her as agreed-on rules. You could agree to change them, sure: decide to play on a clay surface instead of a grass one in tennis. But that seems more like modding a computer game: in both cases you're swapping out a different set of materials with their own dynamics, rather than making changes to a socially / player-maintained ruleset.

John Brindle 8 February 2013 at 11:07

This is a very good point and it does rather queer the divide I drew. That said, I would note that human beings tend to have a certain amount of 'intuitive' knowledge about the physics of the sporting world. Excepting possible evolutionary adaptations (I'm no expert), this may just be another way of saying that they've had a lot longer to learn. Jonathan Blow talks about how games can be intuitive and natural to players by replicating mathematical truths found in the universe. ... Then again, the same is true when you come to a game with knowledge of its generic conventions or experience of the engine/middleware...

My brother Jimmy Brindle has a radical potential solution to this question, which is to say that 'games' are only made up of rules and those rules are played out on a substrate. So in fact The Videogame, Quake, is an engine which suggests, affords, valences certain forms of voluntary activity. Which is a bit of a mindfuck, I think.

John Brindle 21 November 2012 at 05:18

TOO...MANY...WORDS...

I’m not aware of this graphic-less version of Doom, but I’d be surprised if it actually has NO graphics. I’m guessing it has shit ones, or just uses the automap. In the latter case, that would make for a rather different game. But if it literally has a grey screen, with no variation, and no sound, and no feedback of any kind, it's not more playable than a brick, is it? (And if it's primarily done through sound, then sure it isn't a visual game, but that doesn't really change my argument - it just switches the medium with which interaction has been mixed)

You might well say: “these are just aesthetic differences that affect functionality. They merely affirm the primacy of rules.” This is true. However, it’s also to suppose that the rules of a videogame can ever exist pure and uncontaminated, free of representation. They cannot. For the player, ludic and aesthetic content are experienced simultaneously, neither prior to the other. It might be possible – with a pen, paper, and code literacy – to look through the source code of Red Dead Redemption, understand it all, and make a track of decisions which eventually completes the ‘game’. I would propose, however, that this is no longer playing a videogame.

And it avoids a larger question of whether visuals can have more subtle and various effects. Whether, for example, the vast landscapes of RDR – not possible in the era before artist capture – have any effect whatsoever in how the game is played or what it’s like to play it or what it means as a text. As it happens I think there are a few games where aesthetics are primarily responsible for the player appeal; Robot Unicorn Attack is a good example (one of the earliest things I wrote on this blog, and from which I have cadged some content). And there remains the question of meaning and context. A mod for Left 4 Dead which replaces all the zombies with snarling black people (or, for a little contemporary resonance, the protagonists with IDF soldiers and the zombies with Palestinians) DOES become racist, even though the rules are the same.

P.S. by the way, I think adventure games are a really odd example to use for a CURRENT trend of artist capture. Adventure games have been ‘captured’ by artists at least since Myst came out in 1993. Critics praised the cartoons of Broken Sword and Day of the Tentacle and panned Cryo’s pre-rendered schlock but both testified to the importance of visuals in that genre.

P.P.S. I've always been confused about Guitar Hero's whizzy stuff for precisely the reason you cite. But I wonder now if it actually exists for the benefit of spectators. It's very much a game fit for playing in the same room as a bunch of other people, taking turns and cheering each other on.

CdrJameson 22 November 2012 at 02:43

Unfortunately, I agree entirely with you that a defining feature of games is not explicitly seeing the rules, and with Koster that there's fun in discovering them.

It's why inconsistency is often held up for criticism, and why consistent systems are so pleasing. Nothing more satisfying in Eternal Darkness than getting a zombie to trigger a pressure trap for you, or getting the Doom monsters to fight amongst themselves.

Not knowing the rules is crucial to games, with some of the most effective AI absolutely relying on you not knowing the actually simple rules going on underneath. As we see faces in clouds, we see intelligence in what we don't understand but seems complex.

I just found the focus on graphics a distraction from the central argument - it's a supplementary point and, despite its use in the word, not core to videogames any more than sound effects, music or controllers. A tool to be used, yes, but not vital.

I'm coming from a slightly different perspective though, as I'm literally working on the dictionary definition, and have been thinking about Johann Sebastian Joust etc.

I'm also a programmer, and who did artists capture games from?

Word.

I'm also a musician, and who did artists capture records from??

Unknown 6 February 2013 at 14:13

Stopped reading because you broke the fourth wall.

"You, the reader,..."

Don't do it, ever.

Replies

John Brindle 8 February 2013 at 01:45

...

... This isn't a novel. You're reading a blog on the internet!

Juegos 1 October 2018 at 12:38

Gaming is awesome, makes people get together more than sports do. ...

Hmm. I guess the bad people don't count?


BLS Logic 1
CHAPTER 8. DEFINITION

8. DEFINITION

a) Its purpose- rules and fallacies as per Traditional Definition
b) Modern Definitions-kinds.

A definition is a statement which explains what a thing is. It is a statement that answers the question “What is this thing?”
In giving the definition of a term, it is presupposed that the comprehension of the term is understood, because the definition is based on its comprehension.
A real definition is one which explains and reveals the complete nature of a thing or object.
However, this is quite impossible, since we do not usually have a full grasp of the nature of things.
This explains the normal acceptance of a simple description as the definition of an object: “Definition is an explanation of a thing, word, phrase or symbol that is used in order to explain the defined thing clearly.”
By using a definition, we explain actual things as well as abstract concepts. We can see that there are two parts in any definition. The first part consists of the thing that is defined, and the second consists of the words used to explain it.
These two parts have specific names in a definition. The part of the definition that is explained by the rest of the words is called the definiendum.
The part of the definition that explains the definiendum is called the definiens.
So, “a definiendum is a thing, word, phrase or symbol that is defined in a definition,” whereas “the set of words that are used to explain something, or some word or phrase or symbol, are called the definiens.”
The term “definition” comes from the Latin word “definire”, meaning “to lay down the markers or limits.”
Thus, etymologically, to define is to lay down the limits of a thing: a definition is a conceptual manifestation either of the meaning of a term or of the formal features of an object.

Purposes of Definitions

We use the method of definition in order to know things better. Yet, whenever we define, we always define anything with a purpose.

What has been the purpose of Carroll's various definers?

In order to understand a definition, we must first know why we define.
Let us understand the purposes of a definition. We define anything in order to:

1. Increase vocabulary.
2. Explain anything clearly.
3. Reduce vagueness of a word.
4. Eliminate ambiguity of a word.
5. Explain a word theoretically.
6. Influence attitudes.

Let us see these purposes in detail:

1. Increase Vocabulary.
When we are learning any new language, we need to define new words in order to know more words in the language and increase our vocabulary.

2. Explain anything clearly.
When we use any language, some words are not clear enough. At times, just hearing a word is not enough to understand it, so we need to define such words.

3. Reduce vagueness of a word.
Sometimes the meaning of a word depends on the context, and without clarity about the context the word appears vague. A definition is necessary at such times.

4. Eliminate ambiguity of a word.
Some words have many meanings and are at times used ambiguously, so that one does not know which meaning is intended. At such times, a definition is of help.

5. Explain a word theoretically.
We have a number of technical terms and words that cannot be understood without definition. It is a correct and clear definition that can help us understand these words and symbols and phrases correctly.

6. Influence attitudes.
Definition also plays a very important role in society, where people gain by influencing the attitudes of others. At times, for social good or for personal good, people define some words or terms in order to influence attitudes.

Rules of Definition:
A definition has the power to explain something effectively only when it is perfect, complete and faultless.
Such a perfect, complete, faultless definition is called a good definition.
Whenever we want to define anything, we always want to give such perfect definitions, but we seldom know the basic rules of a good definition.
A good definition must follow certain rules in order to be effective.
These rules state that a definition must set out the essential attributes of the thing defined.
A definition should avoid circularity. This means a definition must not repeat the same thing in different ways without adding meaning, as when we find that we cannot define "antecedent" without using "consequent", nor conversely.
The definition must not be too wide or too narrow.
It must be applicable to everything to which the defined term applies.
It must not miss anything out.
Also, it must not include anything to which the defined term would not truly apply.
The definition must not be obscure.
Definition is used to remove obscurity, so using obscure words in a definition defeats the purpose. A definition should not be negative where it can be positive.

These Rules of Definition can be listed as follows:

1. The definition must be clearer than the term that is being defined.

Perhaps this is where the No-Theory theory already says: this is not possible.

The purpose of the definition is to explain and must, therefore be easy to understand. It must not contain terms which will only make it less intelligible.

2. The definition must not contain the term being defined. The definition must use other terms in defining. It is supposed to explain a particular term and is not supposed to use the same term in the explanation.

3. The definition must be convertible with the term being defined. The purpose of this rule is to make sure that the definition is equal in extension with the term being defined. The definition must not be too narrow nor too broad. If the term and the definition are equal in extension, then, they are convertible.

4. The definition must not be negative but positive whenever possible. The definition is supposed to explain what a term or object is, and not what it is not. Only when a term is negative should the definition be negative.

Types of Definitions
Definitions are classified into various types by various logicians. At times, some of these types differ from each other so much that they appear to be contradictory to each other. Let us see some of these types classified by these logicians.

One classification is:

  1. Nominal Definition is a definition which speaks about a term without declaring anything about its nature. This is done by considering the origin of the term, by describing the term, by giving a synonym of the term, or by citing an example that will represent the term.

Classification of Nominal Definition:

a. Nominal Definition by Etymology
– attained by tracing the origin of the term.
Ex.: Fraternity came from “frater”, which means “brother”.
b. Nominal Definition by Description
– attained by describing the term.
Ex.: A rose is a flower.
c. Nominal Definition by Synonym
– it is done by giving a word equivalent to the term.
Ex.: Being kind is being benevolent.
d. Nominal Definition by Example
– it is done by citing anything that will represent the term.
Ex.: Our Chief Executive is Benigno Simeon Aquino III.

2. Real Definition declares something about the term. This kind of definition serves to explain the nature of the thing and to distinguish it from other terms.

Classification of Real Definition

a. Real Definition by Genus and Specific Difference
- a definition that explains the essence of a term by considering the intelligible elements that make up the term.
Ex.: A triangle is a figure with three sides.
“figure” – genus, “three sides” – specific difference
b. Real Definition by Description
- It is done by stating the genus of the term but replacing the specific difference with a logical property which belongs to the term being defined.
Ex.: A Police Officer is a man bestowed with authority to enforce a law.
“man” – genus, “bestowed with authority to enforce a law” – logical property
c. Real Definition by Cause - It is attained by stating the genus of the term but replacing the specific difference by tracing its cause. A cause could be its purpose, function, reason for existence, make-up or origin.
Ex.: A book is a written material made up of several pages and is a source of information.
“written material”– genus, “source of information”– cause or reason for existence

Second classification of definitions is as follows:
DENOTATIVE DEFINITIONS try to explain the meaning of a word by mentioning at least several objects it denotes.
Although we might not view these strictly as definitions, they are, nevertheless, frequently called "denotative definitions."
Among denotative definitions, two different kinds are worth mentioning:

  1. Ostensive definition,
  2. Definition by partial enumeration

Among denotative definitions, ostensive definitions stand out as especially common and useful.

  1. Ostensive definitions are definitions by pointing.

When a young child wants to know the meaning of the word “dog" we are apt to point to a dog and call out the word "dog."
This is an example of an ostensive definition.

  2. A second type of denotative definition worth mentioning is a definition by partial enumeration.

Definitions by partial enumeration are simply lists of objects, or types of objects, to which the word refers.
The list "beagle," "cocker spaniel," "dachshund," "greyhound," "poodle" provides an example of a definition of "dog" by partial enumeration.

While denotative definitions might not really seem much like definitions, they do ultimately attempt to convey the meaning of a word, at least indirectly.
For the hope is that by citing the objects the word refers to, the people we are talking with will come to see what that word means.
However, let's turn now to definitions in the more ordinary sense of the term.

CONNOTATIVE DEFINITIONS are usually formulated in the following three ways:

  1. X is Y. Example: A bachelor is an unmarried man.
  2. The word "X" means Y. Example: The word "Bachelor" means unmarried man.
  3. X =DF. Y. As an example: Bachelor =DF. unmarried man.

In all these cases, the term on the left ("bachelor" in the above examples) is the one being defined, and we call it the "definiendum."

We refer to the terms used to define this word ("unmarried man" in our example), collectively, as the "definiens."

Among connotative definitions, perhaps five different kinds are worth mentioning,

(1) persuasive definitions,
(2) theoretical definitions,
(3) precising definitions,
(4) stipulative definitions, and
(5) lexical definitions.
Let us see these definition types in detail:

  1. Persuasive Definitions: The purpose of a persuasive definition is to convince us to believe that something is the case and to get us to act accordingly. Frequently definitions of words like "freedom," "democracy," and "communism," are of this type. (E.g., taxation is the means by which bureaucrats rip off the people who have elected them.) While these sorts of definitions might be emotionally useful, we should avoid them when we are attempting to be logical.
  2. Theoretical Definitions: Theoretical definitions explain by a theory. Whether they are correct or not will depend, largely, on whether the theory they are an integral part of is correct. Newton's famous formula "F = ma" (i.e. Force = mass x acceleration), provides a good example of such a definition.
  3. Precising Definitions: Precising definitions attempt to reduce the vagueness of a term by sharpening its boundaries. For example, we might decide to reduce the vagueness in the term "bachelor" by defining a bachelor as an unmarried man who is at least 21 years old. We often encounter precising definitions in the law and in the sciences. Such definitions do alter the meaning of the word they define to some extent. This is acceptable, however, if the revised meaning they provide is not radically different from the original. Sometimes by providing precising definitions we can reduce the potential for verbal disputes that are based on a term's vagueness. When A and B begin to argue about whether a bicycle is a vehicle, we try to get them to recognize that the term "vehicle" contains vagueness. Once they have seen this, we can make them agree to reduce it by providing a precising definition.

And again, the No-Theorist says: vehicle is not vague at all. It has a use.

Rosch in fact gives "vehicle" as an example of a 'superordinate semantic category'.

  4. Stipulative Definitions: Stipulative definitions are frequently provided when we need to refer to a complex idea, but there simply is no word for that idea. A word is selected and assigned a meaning without any pretense that this is what that word really means. While we cannot criticize stipulative definitions for being incorrect (and so the objection "But that isn't what the word means" is inappropriate), we can criticize them as unnecessary, or as too vague to be useful.
  5. Lexical Definitions: Unlike stipulative definitions, lexical definitions do attempt to capture the real meaning of a word and so can be either correct or incorrect. When we tell someone that "intractable" means not easily governed, or obstinate, this is the kind of definition we are providing. Roughly, lexical definitions are the kinds of definitions found in dictionaries. Frequently words that are first introduced in the language as stipulative definitions become, over time, lexical definitions. (Consider, for example, Winston Churchill's famous use of the expression "iron curtain.") Besides synonymous definitions, definitions by genus and difference are perhaps the most common type of lexical definition. The essential characteristic of these definitions is that we are defining the definiendum by using two terms in the definiens. For example, in the definition "a bachelor is an unmarried man," we are defining the word "bachelor" in terms of "unmarried" and "man." In this definition the term "unmarried" is the difference, while the term "man" is the genus. (The difference, or difference term, qualifies, or says what kind of thing, the genus is.)

Third classification of definition is as follows:
This list has seven kinds of definitions.
1. Stipulative Definitions stipulate, or specify, how a term is to be used.
Sometimes stipulative definitions are used to introduce wholly new terms, other times to restrict (or narrow) a meaning in a particular context.
The former use may be seen where, for example, the new term "oxycodone" is introduced as an abbreviation (mercifully) for the mouthful "dihydrohydroxycodeinone".

2. Lexical definitions, or dictionary definitions, are reports of common usage.
Such definitions are said to be reportive or reportative definitions.
They are true or false depending on whether they do or do not accurately report common usage.
In addition, if the dictionary is published by a prestigious firm and is compiled by competent and respected lexicographers, then the definitions are normative.
The definitions both report and regulate common usage. It thus becomes possible to say of a given person that s/he is misusing a particular term.
If a person's use of a term is at great variance with how that term is regularly used, and if that person does not stipulate that the term is being used in a specialized nonstandard way, then s/he is using that term incorrectly.

3. Precising definitions are used to refine the meaning of an established term whose meaning is vague in a context and which needs improving.

4. Theoretical definitions are unique to science and philosophy and do not occur in ordinary prose. This is an overly restrictive analysis; theories are not unique to science but characterize virtually all our thinking.

5. Operational definitions explain the way in which a scientific function works. This type of definition has disappeared in physics; occasionally, however, one will still find instances of it in psychology.

6. Recursive definitions have a definiens that is typically in two parts: a so-called 'basis' clause in which the definiendum does not occur, and a so-called 'inductive step' in which the definiendum does occur. At first the definition may appear to be circular, since the definiendum explicitly occurs in the definiens. But the circularity is only apparent, since the basis clause offers a non-circular entry to – not a circle – but a 'chain' of an indefinite number of 'links'. (A small sketch of this structure follows the list.)

7. Persuasive definitions are simply intended to influence attitudes and generally do violence to the lexical definitions. When people begin to cite definitions in a heated argument, it is a good bet that they are making them up.
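
The 'basis clause' / 'inductive step' structure in (6) is just what programmers call recursion. A throwaway sketch of my own – the 'ancestor' example and the names in it are mine, not the quoted author's:

```python
# Recursive definition of "ancestor of x":
#   basis clause:   a parent of x is an ancestor of x
#   inductive step: a parent of an ancestor of x is an ancestor of x
parents = {
    "you": ["mum", "dad"],
    "mum": ["grandma"],
    "grandma": ["great-grandpa"],
}

def ancestors(person):
    """Unfold the 'chain' link by link, entering through the non-circular basis clause."""
    found = set()
    for p in parents.get(person, []):  # basis clause: each parent is an ancestor
        found.add(p)
        found |= ancestors(p)          # inductive step: ancestors of a parent are ancestors
    return found

print(sorted(ancestors("you")))  # ['dad', 'grandma', 'great-grandpa', 'mum']
```

The definiendum ('ancestors') really does occur in its own definiens, but each recursive call works on an earlier link of the chain, so nothing circular is going on.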

A fourth, and exhaustive, classification is as follows:

In short, we can classify the definitions in the following manner:

1. Real: a) Ostensive, b) Extensive
2. Nominal: a) Lexical, b) Bi-verbal, c) Stipulative, d) Per genus et differentiam
These types can be seen in detail as follows:
1. Real definition: A real definition is the definition of something that exists. This means we can use a real definition for explaining things that exist and that can be objectively studied.
We have two sub-classes of this definition type.
These are a) Ostensive and b) Extensive. Let us see them in detail:
a) Ostensive Definition is the method of defining a thing by pointing it out. When we show some object in order to define it, we use an ostensive definition.
b) Extensive definition is the definition where we give examples in order to explain something. When we want to define anything, we list out some of the members or things or types that belong to the group indicated by that word.

2. Nominal definition: A nominal definition is a definition of a word, phrase or symbol. When we wish to define or explain any word, phrase or symbol, we use this type of definition. This means we use a nominal definition when we are defining any concept created by human beings in any human language.
Nominal definitions have four sub-classes. These sub-classes are: a) Lexical, b) Bi-verbal, c) Stipulative, d) Per genus et differentiam.
Let us see these sub-classes in detail.
a) Lexical definition gives a dictionary meaning of a word, or defines a word as it is used by any community or group of people.
b) Bi-verbal definition defines a word by using another word, or a phrase by using another phrase. But if, in doing this, the definition does not make the actual meaning adequately clear, it commits the fallacy of synonymous definition.
c) Stipulative definition is given when someone assigns a meaning to a word in order to influence attitudes or twist the actual meaning of the word. This definition may or may not tell the real nature of the word defined.
d) Per genus et differentiam is the type of definition where we define a word by stating the group to which it belongs, i.e. the genus, and the factor that differentiates the given word from the rest of the group, i.e. the differentia. We use this definition when we are classifying the thing being defined and also showing that, though it belongs to that group, it is still different from the rest of the group members because it possesses some quality that makes it stand out.
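
For what it's worth, definition per genus et differentiam maps quite naturally onto subclassing. A throwaway sketch of my own, reusing the earlier triangle example (the class names are illustrative, not anything from the quoted text):

```python
# Genus = the wider class; differentia = what marks the defined species off within it.
class Figure:                # genus: "a figure"
    pass

class Triangle(Figure):      # the species being defined
    sides = 3                # differentia: "with three sides"

t = Triangle()
print(isinstance(t, Figure), t.sides)  # True 3
```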

Fallacies of definition.
When a definition is not appropriate, it commits a fallacy. Fallacies of definition are the various ways in which definitions can fail to explain terms. The phrase is used to suggest an analogy with an informal fallacy. "Definitions that fail to have merit because they are overly broad, use obscure or ambiguous language, or contain circular reasoning are called fallacies of definition."
The major fallacies are: overly broad or too wide, overly narrow or too narrow, mutually exclusive definitions, synonymous definitions, obscure definitions, self-contradictory definitions and circular definitions.
Fallacies in definitions are listed as follows:
1. Too Wide definition is the definition that applies to things or members to which that word actually does not apply.
2. Too Narrow definition is the definition that excludes many things to which the word being defined actually applies.
3. Mutually exclusive definitions are the definitions where we find some qualities that do not belong to the word defined. The definiens of mutually exclusive definitions list characteristics which are the opposite of those found in the definiendum. e.g. a cow is defined as a flying animal with no legs.
4. Synonymous definitions are the definitions where one word is defined by another without explaining either of them clearly.
5. Obscure definitions are definitions using inappropriate language, or language that feels odd but does not explain anything about the word in question.
6. Self-contradictory definition occurs when the definiens uses two contradictory qualities together in explaining the definiendum.
7. Ambiguous definition is the definition where a word has many meanings & we are using an inappropriate meaning while defining it in some situation.
8. Figurative definition is the way to define something using decorative language. Such a language may or may not explain the word appropriately.
9. Circular definitions: if one concept is defined by another, and the other is defined by the first, this is known as a circular definition, where neither the definiens nor the definiendum offers enlightenment about what one wanted to know.

Limitations of definition
Given that a natural language such as English contains, at any given time, a finite number of words, any comprehensive list of definitions must either be circular or rely upon primitive notions.
A question naturally arises when we start defining things: if every term of every definiens must itself be defined, where at last should we stop?
A dictionary, for instance, insofar as it is a comprehensive list of lexical definitions, must resort to circularity.
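
That pigeonhole point is easy to make concrete. A toy sketch of my own (the miniature 'dictionary' below is invented for illustration): treat a dictionary as a map from each word to the words of its definiens, and any chain of lookups over a finite vocabulary must either stop at an undefined primitive or circle back to a word already visited.

```python
# Each entry maps a word to the words of its definiens.
toy_dictionary = {
    "large": ["big"],
    "big": ["large"],          # large <-> big: a circle
    "small": ["not", "large"],
}

def trace(word, seen=()):
    """Follow the first word of each definiens until we loop or run out of entries."""
    if word in seen:
        return list(seen) + [word, "<circular>"]
    if word not in toy_dictionary:
        return list(seen) + [word, "<primitive: undefined>"]
    return trace(toy_dictionary[word][0], seen + (word,))

print(trace("small"))  # ['small', 'not', '<primitive: undefined>']
print(trace("large"))  # ['large', 'big', 'large', '<circular>']
```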
Many philosophers have chosen instead to leave some terms undefined. The scholastic philosophers claimed that the highest genera (the so-called ten generalissima) cannot be defined, since a higher genus cannot be assigned under which they may fall. Thus being, unity and similar concepts cannot be defined.
John Locke supposes in An Essay Concerning Human Understanding that the names of simple concepts do not admit of any definition. More recently Bertrand Russell sought to develop a formal language based on logical atoms.
Other philosophers, notably Wittgenstein, rejected the need for any undefined simples. Wittgenstein pointed out in his Philosophical Investigations that what counts as a "simple" in one circumstance might not do so in another.
He rejected the very idea that every explanation of the meaning of a term needed itself to be explained: "As though an explanation hung in the air unless supported by another one", claiming instead that explanation of a term is only needed to avoid misunderstanding.
Locke and Mill also argued that individuals cannot be defined.
Names are learned by connecting an idea with a sound, so that speaker and hearer have the same idea when the same word is used. This is not possible when no one else is acquainted with the particular thing that has "fallen under our notice".
Russell offered his theory of descriptions in part as a way of defining a proper name, the definition being given by a definite description that "picks out" exactly one individual. Saul Kripke pointed to difficulties with this approach, especially in relation to modality, in his book Naming and Necessity. There is a presumption in the classic example of a definition that the definiens can be stated. Wittgenstein argued that for some terms this is not the case.
The examples he used include game, number and family. In such cases, he argued, there is no fixed boundary that can be used to provide a definition.
Rather, the items are grouped together because of a family resemblance.
For terms such as these it is not possible and indeed not necessary to state a definition; rather, one simply comes to understand the use of the term.


roškofrenija
Richard Marshall interviews philosophers 1: Robert Stern, Mitch Berman, Andrei Marmor, Meir Dan-Cohen, Christine M. Korsgaard

Hegel’s modest metaphysician
Richard Marshall interviews Robert Stern.

...



RS: ... I think McDowell is right to think of Hegel as trying to ‘complete’ Kant’s project ... Kantians will resist the suggestion that his project needs completing in the first place, while even non-Kantians will differ over where exactly they take the difficulties to lie ... it is increasingly being recognized that Hegel is not the only option available to us here, but that Fichte, Schelling and others may have a claim to be preferred. ...

And you are right that this difference between Hegel and McDowell can be related to the question of quietism. Although the issue is complex, quietism may perhaps be thought of as a combination of two views: (a) the claim that philosophical problems can be dissolved rather than directly answered, if it is shown that the framework that gives rise to the problem is itself questionable or misconceived; and (b) the claim that the way to do this is not through more philosophy, in the sense of taking on further metaphysical or ontological commitments, but by turning to our linguistic practices, or common sense, or our ‘form of life’. Now, while I think his dialectical approach means that (a) can be found in Hegel, I am less sure about his commitment to (b). This is because, I think, Hegel held that we cannot avoid having metaphysical commitments, as these are implicit in our language and our ordinary ways of thinking, so there is (so to speak) no escape from philosophy here – instead, we must rather try to do philosophy better, in a way that enables us to get beyond the problems which need to be dissolved.

Now, when it comes to McDowell, it is clear that he opts for (a) as well; and in so far as he is a quietist, it may be that he also hopes to opt for (b). But in fact (as some of his critics have argued), it is not always clear that he actually does so. So, for example, in Mind and World, McDowell hopes to show us how we can avoid the fruitless debates between ‘rampant Platonism’ on the one side, with all its spooky supernatural commitments, and ‘bald naturalism’ on the other, with its reductionism and excessive scientism, hence dissolving the apparently intractable philosophical issues that neither side can really answer satisfactorily.

This approach therefore seems to fit well with (a), and to echo Hegel’s dialectical strategy. But McDowell’s way to achieve this seems to be to argue for a ‘partial re-enchantment of nature’ as a middle way, but where this looks like it might itself be a metaphysically minded suggestion or a form of ontology concerning the way in which values, reasons, norms and so on can be fitted into the natural world; but if so, this would not comply with the second aspect of quietism mentioned above. From my Hegelian perspective, this would not itself be a problem, for as I have said, I think Hegel would also reject (b); and then if so, McDowell’s position would again turn out not to be so far removed from Hegel’s after all – though it would perhaps be further from one of his other heroes, namely Wittgenstein.

I have obtained a copy of Mind and World but have not yet read it.

...


3:AM: You say that there is also a standard objection to Hegel in relation to a whole bunch of post-Kantian philosophers such as Schelling, Feuerbach, Kierkegaard, Nietzsche, even Derrida and Deleuze. This is the Nietzschean objection that concepts distort because they generalize and so they can’t pick out the individuality in the world. Is Nietzsche right in this challenge, or do you think Hegel’s notion of the ‘concrete universal’ developed in his Logic gets to grips with the problem?

RS: Yes, I see this as a thread in the critique of Hegel that goes back to the very early days of the reception of his work, and shapes a lot of the issues that we now associate with ‘continental’ philosophy, which is a kind of suspicion that reason, ideas, concepts and thought in general distort or cut us off from reality, and that ultimately some other kind of access to it is required – where the difficulty is that things are inherently individual, specific and particular, whereas thought involves concepts that are inherently general, arrived at by a kind of abstraction from the concrete specificity of things. The worry is, then, that thought leads us into a realm of unreal abstractions, away from the concrete reality of lived experience and an immediate grasp of beings in their unique individuality.

Now, as I see it, Hegel’s doctrine of the ‘concrete universal’ was designed precisely to try to address this worry. According to Hegel, when we form the concept of what he calls ‘abstract universals’, we do in a sense abstract from the difference between things in various ways, and so move from what is specific to what is general: so, for example, when I form the concept ‘red’ by looking at a red bus and a red book, I ignore the differences between the bus and the book, and just focus on what makes them similar.

However, in the case of Hegel’s ‘concrete universals’, he thinks, the situation is different: for example, suppose I have the concept ‘human being’. To grasp that concept, I cannot just abstract away from all the things that make individual human beings different and just focus on what makes them similar, as arguably I would have virtually nothing left and certainly not the richness that is characteristic of our concept. Thus, Hegel argues, the various individual ways of being a human being are included within our concept, so that this concept is not really such an abstraction after all. In this way, he thinks, it is a mistake to think of the concept of a concrete universal like ‘human being’ as ‘hollow and empty’ or somehow cut off from the individuals that exemplify it; on the contrary, they are included under the concept, as part of what thought grasps in grasping the concept itself.

...


3:AM: In your book Hegelian Metaphysics you show how Hegel has been influential in the so-called continental tradition, and in particular you defend Hegel against Deleuze and his metaphysics. So what’s this argument about? How seriously should we take Deleuze as a philosopher?

RS: As I mentioned above, I think a central issue for continental philosophy concerns the relation of thought to things, and in particular whether the generality of the former can grasp the individuality of the latter. I therefore see Deleuze’s emphasis on difference as an aspect of this debate, encapsulated in his objection that philosophers like Aristotle, Leibniz and Hegel wanted to capture what makes one thing different from another in conceptual terms, which he thinks cannot be done, as in the end concepts can only tell us what things have in common, not what makes them unique or what they are qua individuals – which is why instead Deleuze seeks to return to the medieval idea of ‘haecceity’, or primitive ‘thisness’.

Now, I think this is a fair worry to have about Hegel, but (on good days, at least) I think Hegel has an adequate way of responding, partly by attacking the doctrine of haecceity (as he does, for example, in his account of the sense-certainty in the Phenomenology), and partly by showing how his doctrine of the concrete universal avoids the problem Deleuze raises, as here the universal contains within itself an important element of difference. But I agree that the issues here are extremely complex, ...

...





The endless search for truth
Richard Marshall interviews Andrei Marmor.

...


AM: ... One of the main challenges we face in legal philosophy is to try to explain in what sense the requirements of law are binding, and what kind of reasons for action they purport to provide. When the law tells you to do (or not to do) something, it typically tells you that you ought to do it, and you ought to do it partly because the law says so.

So the question here is, at least, twofold: first, to explain what kind of ought is meant here; is it necessarily a moral ought? If it is, what gives the law this necessary moral dimension? If it is not necessarily a moral ought, what other kind of ought might be in play here? Second, why would it matter that the law tells you to do it? If you have reasons to do it anyway, the law is redundant; and if you have no reasons to do it, what reasons can the law create, and on what grounds?

... One of Joseph Raz’s main contributions to philosophy of law consists in his insight that we need to think about these questions about the normativity of law in terms of a general theory of practical authority. The law is, according to Raz, essentially authoritative in nature, and therefore the kind of reasons for action the law gives us must be considered in light of what practical authorities in general are, what would make an authority legitimate, and how authorities purport to change our reasons for action. I think that this main insight is a very important one and it certainly advanced the ways we think about law’s normativity.

Over the years I came to disagree with Raz on many of the details ... But I think we owe Raz a great deal ...

3:AM: This approach has been criticized as being a version of conceptual analysis. This approach hasn’t been an overwhelming success elsewhere, to say the least. So why do you think the approach has merit?

AM: As I explained above, this whole issue of conceptual analysis is a red herring. One of my recent articles in legal philosophy is entitled: “Farewell to Conceptual Analysis (in Jurisprudence)”. It is really time to put this anachronism to rest. Analytical legal philosophy is no more conceptual analysis than any other field in contemporary analytical philosophy and the methods it employs, whether linguistic or other, are really ill described as an analysis of concepts. The conceptual analysis that was central to Oxford philosophy in the 1950s may have influenced contemporary legal philosophy, as it did in other areas, but it is no longer the kind of philosophy we do.

3:AM: The role of interpretation in law is central to Dworkin’s approach. He thinks everything is open to interpretation. Your disagreement and your work on the role of interpretation in law is a key part of your work. So can you explain the role of interpretation in law so we can see that Dworkin’s view is untenable?

AM: As I mentioned earlier, I think that Dworkin has a lot of interesting and insightful things to say about the nature of interpretation and I happily endorse many of them. My main disagreement with Dworkin on this issue is about the ubiquity of interpretation. I just don’t think that it makes much sense to claim that everything is subject to interpretation, especially with respect to linguistic communication. “Everything is interpretation” is no more true or helpful than the idea that because you can, if you will, doubt anything, therefore everything is in doubt. Just about anything meaningful can become, in a certain context or under certain conditions, an object of interpretation. But this does not mean that we engage in some interpretative reasoning whenever we grasp a piece of linguistic communication. Interpretation, as I said earlier, aims to clarify something that is, in a given context, unclear or not sufficiently clear. But you can only seek to clarify something on the background of much else that is clear enough, at least in the relevant context.

3:AM: You discuss a broader idea of interpretation than one limited to law. You discuss interpretation in art. Can you say how you approach this and how interpretation in art differs from when used in law, and also say where there are overlaps?

AM: There are, indeed, many similarities between interpretation in art and interpretation in law.

Buckle up, kids!

In both cases we seek to understand the content or the significance of a given object or text in light of some reasons or interests we have in the relevant type of objects/texts in question.

Hmm. But he just said

Just about anything meaningful can become, in a certain context or under certain conditions, an object of interpretation. But this does not mean that we engage in some interpretative reasoning whenever we grasp a piece of linguistic communication.

Similarly, I would say:

this does not mean that we seek to understand the significance ... whenever we look at art.

The main difference, however, is this: works of art are created as objects of interpretation, they are created (partly) as an invitation to appreciate their aesthetic and artistic features, to appreciate the kinds of achievement they manifest, and the like.

Hmm. But we just heard that

“Everything is interpretation” is no more true or helpful than the idea that because you can, if you will, doubt anything, therefore everything is in doubt.

Apparently invitations are cheap.

Law, however, is not created to become an object of interpretation – law is created to guide human conduct; it has practical purposes. Making a law is not an invitation to the public to offer interpretations of it or to appreciate its legal qualities.

And yet law is interpreted and re-interpreted ad nauseam. Daresay, my conservative friends do verily appreciate the law.

Apparently intentions are cheap, too.

Second, there is a clear sense in which art is an essentially contested concept. That is, the word “art” stands for a certain form of human achievement, an aspiration for excellence that is essentially contested, inviting different views about what kind of achievement it is and what its standards of excellence are. Each work of art contributes to this cultural debate, as it were, making an implicit statement about its conception of art, what the creator values in it, etc. And that is why the question of whether a given artifact is a work of art or not, and the question of what we value about art, are very closely linked.

They are linked by theorists, certainly, because theorists are guessers. To make a guess, you need at least a few "links" in place; say, as here, between (rational) ontology and (empirical) valuation. Otherwise you're not even guessing, you're just enjoying the sound of your own voice.

If we look and see, however, the first thing we notice is all the bad art. Carroll, throughout his work, is uncharacteristically sharp on this point. The problem of bad art is highly destructive of lots and lots of "theory". It shows that arthood and value vary independently; also that the contested terrain, while irreducible, is small and discrete.

Legality, however, is not an essentially contested concept. We do not regard legality as a form of human achievement, inviting, as it were, different conceptions of what makes it an achievement, what it is an achievement of and the standards of excellence we associate with it. The making of good law – morally, politically, economically or otherwise – is of course a form of achievement. We may have different conceptions of what would be a good law in this or that domain, but the relevant issue that is essentially contested here is not the legality of the lawmaking but the moral or other evaluative dimension of it – that is, the contested element here is the “good,” not the “law.”

3:AM: If interpretation is more limited in scope than some would have thought, then we might want to ask how we fix understandings without interpretation. Social conventions are often thought of as doing this. So you see social conventions as a species of norms, rules regulating human conduct, and also as arbitrary. You make a distinction between deep and surface conventions. Can you explain this? You also distinguish between coordination conventions and constitutive conventions. The first position is associated with David Lewis, I believe, and you take issue with him, don’t you? Can you say something about the basic approach you take to conventions and why they are important to practical reasoning? And how do you respond to criticism that your account slides between different levels of comparison – for example, between rules for chess and rules for following traffic regulations?

AM: Let me see if I can summarize the main ideas of a book-length project in a few sentences: there are countless social rules, of various kinds, we follow in our daily activities. These rules and social norms are there for reasons, there are reasons for following them. Now, some rules are basically determined by the reasons to have them. A rule to avoid humiliation or torture, or such, is the kind of rule whose content is determined by the reasons for having it. However, countless rules we follow are not like that: There are reasons for having them, but those reasons do not fully determine the content of the rule we follow. There are reasons, for example, to greet or otherwise acknowledge acquaintances when we meet them, but those reasons do not determine the actual content of the greeting conventions followed in different societies.

That is the sense in which a conventional norm is arbitrary: the content of the rule is under-determined by the reasons for having it. And then, part of the reason for following the rule depends on the fact that this is the rule that happens to be followed by others in the relevant community. So these are the kind of rules or norms I am interested in and that I think we by and large call social conventions.

Now, David Lewis had a rather ingenious idea about the nature of conventions and the rationale of following them: the idea was that these rules emerge as solutions to large-scale recurrent coordination problems. This would nicely explain why reasons for following conventions depend on general compliance, and it would also explain the sense in which conventions are under-determined by reasons. I think that Lewis is quite right about numerous cases, many social conventions have this coordinative rationale, including some aspects of language.
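
Lewis's picture is easy to sketch. A bare-bones toy version of my own (the driving-side case is the stock illustration, not Marmor's text): everyone does better choosing the same side of the road, and the reasons in play do not care which side that is.

```python
# A minimal coordination game: payoff is good iff both players match.
def payoff(my_side, their_side):
    return 1 if my_side == their_side else 0  # 1 = no collision, 0 = crash

for profile in [("left", "left"), ("right", "right"), ("left", "right")]:
    print(profile, "->", payoff(*profile))
# ('left', 'left') and ('right', 'right') are both stable outcomes: the content
# of the convention is arbitrary, but following whichever one others follow is not.
```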

However, I argue in my work that there are many other cases, that there are many social conventions that cannot be explained in terms of the solution they offer to a pre-existing recurrent coordination problem. I call these constitutive conventions because, as I try to show, these are the kind of conventions that function as constitutive rules of a certain type of human activity, like playing chess or football, or forming distinct artistic genres, etc.

The distinction between deep and surface conventions is a different matter, and a bit more complicated. The intuitive idea here is that there are some conventional practices where we do not follow the underlying conventional norms directly but by following more concrete and more shallow conventions that instantiate them. For example, there is an underlying conventional practice of showing respect for others in certain circumstances by one’s dressing up in certain ways; notice that other cultures might have different conventions here, say, showing respect by some outward appearance that is not a dress code but some other means. However, this deep convention of showing respect by a dress code in itself does not tell you how to dress for, say, a wedding or a funeral… the conventions we follow are the more concrete or specific local and shallow conventions of, say, requiring men to wear a suit and tie to a wedding ceremony or such.

These two distinctions, however, between coordination and constitutive conventions, and between deep and shallow conventions, are quite distinct; they pick out different types of phenomena.

3:AM: When you look at conventions of language you disagree with those who say that language is hugely conventional. Presumably Wittgenstein is the parade case for this idea, but also Searle is in trouble if you are right. Can you say why you disagree with what for many has been taken to be an important fact about language that words in natural languages are conventional? Is this a version of Emma Borg’s semantic minimalism? And when you argue that borderline cases typically contain conventional variations of meaning is this the argument that vagueness is application of conventions to literal meaning?

AM: Truth be told, I am not sure who is in trouble here, if anyone is. The question I deal with, that I think you refer to here, is whether the literal or lexical meaning of words in a natural language, the Fregean “sense” if you like, is conventional or not. This question is not central to semantics, by any means. In fact, very few philosophers of language even posed this question or considered it in any detail. And I am certainly not claiming that the conventionality of literal meaning is something that should become a central issue in semantics.

But it is an interesting question, nevertheless, and partly because it allows us to test our theory of conventions and see how it handles such problematic cases. Now, my argument that literal meaning is much less conventional than generally assumed rests on the rather simple idea that in most cases, though by no means all, there are reasons for having words designating certain things and those reasons pretty much determine what the word means.

In other words, my suggestion is simply this: if you agree that conventions are rules under-determined by reasons, you can easily come to see that the meaning of words in a natural language is not as conventional as one might have thought.

But notice that I am talking about meaning or sense, and not about the notation, or the sound-sense relations. Those are, no doubt, by and large conventional. The fact that in English the word “chair” stands for chairs is, of course, entirely conventional; other languages use a different sound to stand for the same word with the same literal meaning. But the fact that we have a word to designate these types of artifacts is not arbitrary in the relevant sense, and the reason for designating those artifacts by a certain word pretty much determines what the word means or stands for. Now true, if this is correct, then one should have some doubts about some of Wittgenstein’s ideas, in particular, one should seriously doubt the ubiquity of family resemblance concepts. And I have expressed my doubts about family resemblance concepts in my book in some detail.

3:AM: I think you argue for a version of pragmatism, the view that meanings depend on conversational situations over and above the pure semantic properties of words. In normal conversation, pragmatics help regulate the cooperative exchange of information; in law, cooperative exchange is replaced by strategic interaction. Is this right? Can you say something about this?

AM: I don’t think that any philosopher of language these days would doubt the essential and ubiquitous role played by pragmatic factors in linguistic communication. My interests in the pragmatic aspects of communication in law pertain to some unique features of legal discourse, in particular, to the fact that communication in law, as in many other different contexts, is often very strategic in nature.

Strategic communication requires some modifications to the basic Gricean model in pragmatics, and this is partly what I aim to do. In this case, law is just an example, it is an example where we cannot simply rely on the Gricean model of conversational norms because it is just not the kind of conversation where parties aim at a cooperative exchange of information. I am not saying that Grice was wrong, just that we need to modify and extend his model to account for different types of conversation, and strategic conversation, of the kind we find in law, is one such central case.


Oh, For The Love Of Art!
Artist Profile: Martin Kippenberger

Born: February 25, 1953 in Dortmund.
Died: March 7, 1997 in Vienna (cause of death – liver cancer)

I like Kippenberger partly because he always puts a smile on my face. Even though he was a pretty major alcoholic, he didn’t have the “tortured artist” persona that some of my other favorites have. ...

🧐 🧐 🧐

An interesting work, which I unfortunately couldn’t find a picture of online, is a shelving unit readymade piece that Kippenberger painted gray and named ‘Wittgenstein’, after the philosopher. Matthew Collings writes, in ‘This is Modern Art’, “…Wittgenstein was a favorite author among Minimal and Conceptual artists of the 1960s…it seems appropriate because grey was the favorite colour of both those movements.”

😴 😴 😴


Histories and Theories of Intermedia
THOUGHTS ON MUSIC AND THE AVANT GARDE, Chris Cutler

1. Art and anti-art.

Art as currently understood is neither essential nor timeless. References to 'primitive art,' 'medieval art' or 'the art of ancient Greece' create confusion by conflating fundamentally different and functionally incommensurate social practices1.

i.e.
Something is wrong
with this 'use of the language'.

We can't both be right
(i.e. speak right).

I tend to think that the social practices are incommensurate, but that we are not restricted to looking only at "social practice"; not, that is, unless we want to be so restricted, as most of us butt-sniffing Social Animals seemingly do.

If we do use the same word
for different "social practices",
that is our hint
to keep 'looking'.

In our own time, vernacular understanding of what art is has evolved out of the category of Fine Art, coined in the eighteenth century to claim an elevated status for specific artisanal practices as a connected sphere of autonomous cultural production. Two hundred and fifty years later, this status is about the only thing that survives intact, every other original attribute assigned to the term having been incrementally rejected, ...

Another clue, starin' right atcha bro.

Why should elevated status persevere
and every other original attribute
be rejected
?

Could it be
that
the practice of status
is
both
essential and timeless
?

Perhaps
a connected sphere of autonomous cultural production
is
its foremost precondition
?

Yet this sphere is,
like all human achievements,
difficult to build,
impossible to repair,
and
easy to destroy
?

Like,
it took two thousand five hundred years
to build,
two hundred and fifty years
to exist,
and
twenty-five years to destroy?

VII. The birth of anti-music.

It was not until the late 1940's that the qualifier avant garde was finally attached to a music, and then it was to the output of composers connected through the Darmstadt Summer school, ... Some of its celebrities were also immodestly capable of gross intolerance: ... Like other neo-movements, this 'avant garde' looked initially back to the past, extolling and extending proposals made by Schoenberg, Webern and Berg some thirty years earlier, but now raising them to the status of a dogma15. ... At the same time, in the United States, John Cage and his associates ... did not consider themselves an avant garde at all but rather 'experimental' artists;16 indicating perhaps that the older term had already become associated with reactionary, proscriptive, intolerant and authoritarian attitudes. It was the experimentalist, Cage, and not the Darmstadt avant garde who arrived at the genuinely radical musical equivalent of the philosophical move first made by Duchamp with his readymades.

... Duchamp had taken an object ... and had submitted it for exhibition. If [Fountain] was an artwork - and it claimed to be - it took the form of a nest of questions.

Indeed, submission for exhibition does betoken precisely this claim. It betokens a "claim" but not an 'intent'. The claim can be read off the act behavioristically, whereas the intent cannot be. There are infinite possible intents which could motivate such a claim; and there is the possibility, remote as it seems, that the claim did not figure in the submitter's intent, and that they are oblivious to the impersonal social fact of the claim.

Unfortunately our author can find no better way to phrase this than that it, the work itself, is the entity making the claim. Wittgenstein and Friends would have had a field day with that one.

With his 1952 composition 4'33", Cage issued a similar challenge to music. ... any sound actually heard in performance would not have been determined by the score (although it could be argued strongly that it was intentionally included in it)

. . . or at least that it was not not intended . . .

and would be completely outside the control of the composer. Of course, commonsense says silence is not music, nor is unintended noise. The event, however, took place in a concert hall and was announced as a composition by a recognised composer. If it was a composition - and it claimed to be - it took the form of a nest of questions.

Well, sure, in a manner of speaking.

On the other hand, commonsense is little else than the disinclination to ask such questions. I think the shock of the new tends to elicit questions whose answers the questioner thinks they already know. i.e. These questions are either 'performative', or else they are genuine distress signals.

Am I now playing the mindreader? I'm not so sure. Again,

You speak as if you are asking a question about the dog and the cow. But you know the facts about them.
...

In asking me this question you are treating me like a judge of the High Court who is considering a question of law not of fact, ... Now I can of course give a decision if that is what you want. But you want more than that.

The framing of this article lends itself perfectly to this analysis.

Silence is not music.

The event took place in a concert hall.

It was announced as a composition by a recognized composer.

If these are the "facts", and if they are "known", then one can ask only "questions of law, not of fact" in regard to them. I've been out of the weird music scene for a while, but this is exactly what I recall.

To my mind, Danto is at his most convincing on just this issue. Certainly, for such works ever to be widely accepted as art there does have to be some change in attitude, beyond the inevitable passing of all the old "commonsense" people and notions. I just think it's incredibly dunderheaded to name this change as a change in people's 'theory' of art; i.e.,

terrain is constituted artistic in virtue of artistic theories.

Again, this is something of social fact, observable behavioristically; but we cannot read these people's minds, and therefore we cannot assume that we know what is motivating the change in attitude. Probably we cannot even say with any confidence exactly what an 'attitude' is, nor whether a change has been effected via 'theory' or via some less exalted route: peer pressure, brute-force exposure, an organic change of heart, etc. 'Theory' is like 'attitude' in that it is post hoc, implicit, evoked, but not necessarily explicit. No cognition leads inevitably to any given change in 'attitude'; similarly, the class of possible cognitions which can produce a given change in 'attitude' is infinite.

If I had to guess, I would say that terrain is constituted artistic in virtue of . . . structural amnesia. (See Goody and Watt, quoted here.)

... what made either of them art or music? To put it another way, what would art or music have to be for either of these productions to be an instance of it? ...

In its very quiet way, 4'33" represents nothing less than an attempt to dissolve the category of music. It asks of music, as the readymade asks of art: if this is music, then what is not? 18

...

John Cage's score for 4'33".

'Classification . . . ceases when it's no longer possible to establish oppositions.' John Cage (1968/73).19

Yes. Solid workaday metaphysics here.

Now, if Cage actually anticipated question-slinging Fellow Travelers sinking ever deeper into the quicksand by falling for this and then working ever more vehemently to (re-)establish oppositions, then that would be what makes this move truly brilliant.

Otherwise it's not very impressive, because the above-given "facts", if that is what they are, leave plenty of old "oppositions" utterly untouched. By the time you've heard of any avant-gardist, the same can almost certainly be said of them.

Genuine dissolution of category requires the facts of the case themselves to be unclear or unknowable. That is what seems to happen here, i.e., with the question ' . . . but is it art?' coming to the fore. But again, this is a "question of law, not of fact". The avant-garde cannot make its case without furnishing myriad facts; these facts cannot be furnished without affording the "establishment" of "oppositions"; and then, per Mr. Cage above, these "oppositions" finally result in the formation of "classifications", explicitly if we so choose, but implicitly even if we don't.

If Cage's project really was to confound categories, then he failed in showing himself too fully, too "factually" we might say. That's the danger of trying to get a gig; doubly so of actually landing it.

To be useful, a definition requires limits - things that it excludes. The implication of 4'33", as of the readymade, is that nothing is excluded and thus that any object presented to the eye could fall within the purlieu of art, or any presentation to the ear, whether it sounds or not, could be experienced as music.

In all seriousness, had I been around back then I think I would have totally missed this implication.

I seem to recall a comedy sketch (SNL?) from the GWB years which portrayed GWB as taking a 'where will it end?' position on gay marriage: 'pretty soon people will be marrying their refrigerators.' You could say that this was an "implication" of the pro argument, and you wouldn't be wrong. Or, you could say that no pros ever ask for this and no antis take it seriously as a problem, and again you would not be wrong.

"Implication" is another slippery term, like 'attitude' and 'intent'. It takes a certain kind of person and standpoint to draw this particular implication out of 4'33", just as it does to see same-sex marriage as a slippery slope rather than as an exceedingly narrow interest group grievance.

But the formulation 'X is everything' can hardly succeed as a meaningful definition. Thus it must be doing some other work. With Duchamp, I believe it was largely philosophical and metalinguistic. With Cage, thirty six years later, there is more: the question remains, but when it is cast in the light of his deliberate abandonment of intentionality the year before, it also becomes an instruction to the listener to interpret - and thus in large part to create - the work.20 With such mechanical chance procedures an author is at pains to create nothing and to say nothing. This can still function as communication on a metalinguistic level - as a kind of question - but only usefully once. After that, it is just repetition. Cage's use of indeterminacy is clearly not merely to pose a question but to impose a formula that seeks to make dialogue disappear into sets of interpretative monologues.21 If Cage's move is accepted at face value, art becomes a series of riddles to which there are no answers. It also ceases to be a medium of communication and becomes instead an opportunity for perception.

While this notion has borne much fruit, it has also proved highly problematic.

Duchamp's radical proposition - his concept of a work that disappears into the idea of itself - was, I think, in the context in which it first appeared, meaningfully avant garde. Repetition of that question is not. Beyond this, when the primary responsibility for meaning is shifted deliberately from producer to interpreter, any residual notion of an avant garde must inevitably vanish with it. No public can be in advance of its own taste.

There's lots more of this essay, including a few nice insights and a lot of vague wallowing and name-dropping. I'm not sure who this kind of thing is for. Anyway, the hour is nigh, so the remainder will not be 'for' this post.


International Political Theory
The Fall of Postmodernism and the Restoration of Reason

There is a logical disjuncture between viewing everything in a certain context and claiming that truth does not exist because a uniform context does not exist. This is logically false and has caused much misapprehension. ...

...

Postmodernism might appear to be a contemporary idea – an outgrowth of rampant liberalism. However it is not. Postmodernism is [a] very old idea that goes back to the ancient Greek sophists. Then it keeps on persistently reappearing throughout history. ...

Central to the ideas of the ancient sophist was the notion that “man is the measure of all things” which means that all truth is relative. Very soon the logical inconsistency of this claim became apparent. ... Socrates rejects the view of the relativity of truth as a self-contradiction. If one claims that truth does not exist or is relative that means that he makes a statement about truth itself which is the very thing he is trying to oust. ... the claim that everything is relative is itself relative ... [Socrates'] refutation of relativism is still valid and is often reiterated by other philosophers like John Searle.


Balkinization
The Letter

Interesting episode here, and interesting enough comments that I read most of them.


Comments


I think that [Leo] Strauss's criticism of Nietzsche was that you should not write in such a way that dangerous truths become too exciting, too persuasive, and too easy to understand. This seems like a reasonable criticism. Nietzsche was not a German patriot or an anti-Semite, but his extreme anti-egalitarianism and penchant for cruelty would seem to tend toward unwholesome sorts of authoritarian rightism.

# posted by John Emerson : 6:22 PM

Leo Strauss’s theory of exoteric and esoteric writing posits that pre-modern philosophers, to avoid persecution and responsibly impart dangerous truths, hid their true, radical teachings (esoteric) behind a conventional, superficial surface layer (exoteric) meant for the public. This required careful readers to "read between the lines" to uncover the authentic philosophy.

(Google AI Overview, 2 Feb 2026)

The Overview, at least, does not say that Strauss considered esotericism to be the inherently superior tactic. It is superior at least in keeping the hermeneutic treadmill in frenetic motion; that much we know.

Perhaps all truth can equally well be used for good or for ill. Speak it, or don't; from there, still nothing is assured.






I think the reason that Strauss was ambivalent about liberalism is that he, like Nietzsche and Schmitt, had a mistakenly negative view of human nature.

In The End of History and the Last Man, Fukuyama subscribes to the view from Nietzsche, Hegel (Kojeve's interpretation), Hobbes and others that humans have only selfish fundamental motivations, namely material desires and a drive for recognition from others.

Fukuyama presents Anglo-American liberalism as having the same view. Actually, those philosophers believed, as do most Americans, that human beings also have a whole variety of pro-social, unselfish motivations, as Adam Smith explained in The Theory of Moral Sentiments. And indeed, human society would simply be impossible without these pro-social motives.

If you think that humans are simply selfish, then democracy is not going to seem like a very workable idea, and in constant danger of collapse into authoritarianism. With a fuller view of human nature, democracy, while certainly not perfect, seems like a much better idea.

Now the question is who is right about human nature. As it happens, the various relevant sciences ... have investigated this issue in great detail, and they have determined that the Anglo-American view is largely correct. See for instance Steven Pinker's book, The Blank Slate.

...

# posted by Les Brunswick : 10:53 PM

Yeesh. Why ask Pinker when we could ask Becker instead?

When Becker posits an irreducible 'working level of narcissism' (Denial of Death, p.3), he is being realistic but not fatalistic. Rather than seeking to attenuate this narcissism, the trick is to marshal it in society's favor.

We mentioned the meaner side of man's urge to cosmic heroism, but there is obviously the noble side as well. Man will lay down his life for his country, his society, his family. He will choose to throw himself on a grenade to save his comrades; he is capable of the highest generosity and self-sacrifice. But he has to feel and believe that what he is doing is truly heroic, timeless, and supremely meaningful. The crisis of modern society is precisely that the youth no longer feel heroic in the plan for action that their culture has set up.

(DoD, p. 6)

Optimism about 'human nature' has not gone missing here. What Becker does, rather, is to channel that optimism into the narrow confines within which it is valid: when people share certain collective beliefs, pro-social behavior is self-enhancing and self-sustaining.

you get your recognition from others not on what you take—like the baboons, but on what you give. Among primitives today the main reward of the one who kills the big animal is the prestige of being able to distribute it to his family and to others. Often the hunter himself gets the smallest share or the least desirable part of the animal. Unlike the baboon who gluts himself only on food, man nourishes himself mostly on self-esteem.

(The Birth and Death of Meaning, p. 3)

For Becker there is no mystery about pro-social and anti-social behavior. The mysterious and delicate achievement, rather, is the "hero-system" itself: without "heroic, timeless, and supremely meaningful" collective projects, we have no gifts to give and no one to give them to; all the same under liberal as under authoritarian rule.

The various relevant sciences offered above are not only silent on all of this; they have also unleashed a deluge of spurious data and popular confusion.

the laboratory situations in which judgmental errors are demonstrated are not merely artificial in some neutral way. Rather, they are specifically designed to fool subjects. ...it would be trivially easy to design a study that would lead subjects to make correct judgments of experimental stimuli. Unfortunately, such a study would not prove that people reason well in real life, for exactly the same reason that studies of error do not show that people reason poorly.

(David C. Funder, 'Process Versus Content in the Study of Judgmental Accuracy', p. 207)

Continuing in a footnote,

One of the errors most commonly demonstrated in these experiments is the "fundamental attribution error," the putative tendency to overestimate the importance of persons relative to situations in the determination of behavior. Ironically, it could be argued that by attributing errors to shortcomings of subjects...instead of to the deliberate rigging of experimental situations, researchers are themselves committing a particularly grievous instance of the fundamental attribution error.

These remarks were published in 1990, a quarter century before 'replication crisis' entered the lexicon.

Of course Becker's elixir is cognitive through and through ("feel and believe"), but he pursues it not by any "deliberate rigging" but by aggregating anthropological data. 'Anthropology' literally means 'the study of humans', but really the comparison of cultures is the study of "situations": of people in situations of their own collective design.

This is the lens through which Becker viewed democracy and liberalism. He said only a little bit explicitly about what he saw there, but his oeuvre as a whole suggests much more.

We can see today, I think, that liberal freedoms are highly destructive of collective belief and highly constructive of invidious comparison. Illiberal agitation appears first and foremost as a desperate effort to hold together a polity which by any reasonable standard has become so fragmented as to be ungovernable, namely because no two people quite "believe" in the same things. What people believe in is one thing; the collective aspect (or lack thereof) is something else.

Whether or not Fukuyama accurately portrays Anglo-American liberalism, he is on point when he homes in on the power of invidious comparison at both the individual and group level. As Becker put it, quoting Alan Harrington, nothing undermines spirited belief quite like "strange individuals who seem to be making out all right". (Escape From Evil, p.113) This is Ground Zero for the erosion of belief: we begin to fear that someone else has it better, and that they have it better because of their strangeness, not in spite of it.

Where Fukuyama goes awry is in thinking that this ensures the global future of liberalism, as one by one people around the world realize it is the best and that they deserve it. This is wishcasting. Becker makes a more direct inspection of what happens when the authority of a culture is challenged from without.

Cultural illusion is a necessary ideology of self-justification, a heroic dimension that is life itself to the symbolic animal. To lose the security of heroic cultural illusion is to die—that is what "deculturation" of primitives means and what it does. It kills them or reduces them to the animal level of chronic fighting and fornication. ... Many of the older American Indians were relieved when the Big Chiefs in Ottawa and Washington took control and prevented them from warring and feuding. It was a relief from the constant anxiety of death for their loved ones, if not for themselves. But they also knew, with a heavy heart, that this eclipse of their traditional hero-systems at the same time left them as good as dead.

(DoD, p. 189)

If all of this is pessimistic, then all of these authors are pessimists. I would say, instead, that Becker's account simply marks out the natural limits on liberal societies' collective ambitions, and that the ambition to cram in as much "strange"-ness as possible is not in and of itself a worthy ambition, nor for that matter is it an essentially liberal one.