Impressions and Links
from NASSLLI 2012.

North American Summer School in Logic, Language and Information 2012.

University of Texas at Austin. June 16-24, 2012.

I had the great pleasure of taking part in Nasslli 2012.
Below you will find impressions from the conference, and links for further reading.

Indeed, they were marvellous days at the University of Texas at Austin, where the conference was held.
Founded in 1883, the university has a campus located approximately 0.25 miles (400 m) from the Texas State Capitol.

The Texas Capitol building was originally designed in 1881 by architect Elijah E. Myers, and it is still a truly magnificent building.

Classes were held at the UTC (University Teaching Center, corner of 21st Street and Speedway).
So, after classes (usually 10-12 hours a day) I would walk from the University Campus to the State Capitol, and from there to downtown Austin. A great way to digest the events of the day, while enjoying stunning architecture (See: YouTube video).

1. Campus life - Go Longhorns!

I read somewhere that ''walking around the University of Texas campus might bring up a special quality inside of you that makes you feel like you belong here''.

Certainly, I felt that way each morning, as I walked through campus towards the UTC building.
On my way, I would usually watch the Longhorns football team practicing, see students in intense debates with each other, young couples in love holding hands, and groundkeepers making sure that lawns and trees were trimmed perfectly.
Usually, I would stop and read the posters next to the library, baffled by all the things you could volunteer for on any given day.
I was quite tempted to volunteer for:
- a Face and Object perception EEG study.
- a Virtual Reality study.
- a Depression study (Sort of interested anyway... ).
- a Decision study at the Maddox Lab (UT Austin): a study on ''How do we make decisions''.

All super interesting things!
And if that was not enough, you could always participate in extracurricular activities like a mud-and-guts run.
Apparently, a popular thing to do.

Could there be a more fun place to be?

But enough about the surroundings, back to the conference:

2. NASSLLI 2012.

NASSLLI took place at the University of Texas, June 18-24, 2012.
Each day you could take classes in Semantics, Logic, Computation and Philosophy & Cognition. Each course had five sessions over the duration of the summer school.
I tried to follow a little bit of everything, but below you will find some of my notes from the Philosophy & Cognition section: great classes on ''Stochastic Lambda Calculus and its Applications in Cognitive Science'', ''Meaning as Use: Indexicality, Expressives, and Self Reference'', ''Belief Revision Meets Formal Learning Theory'' and ''Possible Worlds: A Course in Metaphysics (For Computer Scientists and Linguists)'', which I followed from start to finish.

Saturday and Sunday, June 23-24, I followed Reasoning and Interaction at NASSLLI (RAIN), a workshop concluding the summer school, featuring talks by NASSLLI instructors, participants and other visiting scholars whose work is related to reasoning and/or interactions among individuals and groups.
And there were lots of evening events, including plenary lectures like The Turing Test and Its Implications by Michael Tye (Department of Philosophy, University of Texas at Austin).
Saturday evening (June 23rd) I took part in the Turing Symposium, commemorating the centenary of Turing's birth (Turing was born June 23rd, 1912).

All in all it was a mind-blowing week. Something to remember for years to come!
Impressions and links (for further reading) follow below:
Simon Laub, NASSLLI 2012 lecture. UTC Building, University of Texas at Austin. To weigh in the mind with thoroughness and care what Logic, Language and Information is really all about.

3. Talk: Stochastic Lambda Calculus and its Applications in Cognitive Science.

Noah D. Goodman talked about Stochastic Lambda Calculus and its Applications in Cognitive Science, i.e. a probabilistic language of thought.

3.1. Monday:

Monday started with some thoughts about logic and probability, key themes of cognitive science that have long had an uneasy coexistence.
Then we were given an introduction to λ calculus (universal calculation), followed by an introduction to Bayesian nets and conditional probabilities (we were given some illustrative examples of ''taking out rows'' in matrix systems, followed by recalibration of the probabilities).
All a build-up to the introduction of stochastic lambda calculus, a probabilistic language of thought approach that brings together logic and probability.
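The ''taking out rows'' idea can be sketched in a few lines of Python (a minimal sketch; the network and its numbers are my own toy example, not from the lecture): to condition on an observation, discard the rows of the joint table that are inconsistent with it and renormalize the rest.

```python
from itertools import product

# Toy joint distribution over (rain, sprinkler, wet_grass).
# All numbers are illustrative, not from the lecture.
def joint(rain, sprinkler, wet):
    p = (0.2 if rain else 0.8) * (0.1 if sprinkler else 0.9)
    p_wet = 0.99 if (rain or sprinkler) else 0.05
    return p * (p_wet if wet else 1 - p_wet)

# Condition on wet_grass=True: keep consistent rows, then renormalize.
rows = [(r, s) for r, s in product([True, False], repeat=2)]
weights = {row: joint(row[0], row[1], True) for row in rows}
z = sum(weights.values())
posterior = {row: w / z for row, w in weights.items()}

p_rain = sum(p for (r, s), p in posterior.items() if r)
print(f"P(rain | wet grass) = {p_rain:.3f}")
```

The ''recalibration'' is just the division by z: the surviving rows are scaled so they sum to one again.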

3.2. Tuesday:

Tuesday's lecture was about models in the Church (computer) language, and how a small set of concepts can lead to a huge set of potential inferences. See: Probabilistic models of cognition.

Potentially, things as complicated as:
- Alice believes jumping causes luck.
- Alice wants to win Friday's game.
Inference: Alice will jump before the game on Friday.

All of this could be modelled within this framework. See my CogSci 2012, Stochastic Lambda Calculus notes about Causal Models in Medical Diagnosis.
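A Church-style rendering of the Alice example (my own sketch, in Python rather than Church's Scheme syntax; the priors are invented) conditions a generative model of Alice's behaviour on the two premises, and queries the conclusion:

```python
import random

def alice_model():
    # Generative model (toy priors): belief, desire, and the resulting action.
    believes_jumping_causes_luck = random.random() < 0.5
    wants_to_win = random.random() < 0.5
    # Alice (mostly) jumps when she both believes and wants.
    p_jump = 0.95 if (believes_jumping_causes_luck and wants_to_win) else 0.1
    jumps = random.random() < p_jump
    return believes_jumping_causes_luck, wants_to_win, jumps

# Condition on the two premises (rejection-style), query the conclusion.
random.seed(0)
samples = [alice_model() for _ in range(20000)]
kept = [j for b, w, j in samples if b and w]
p_jump_given_premises = sum(kept) / len(kept)
print(f"P(jump | belief, desire) ≈ {p_jump_given_premises:.2f}")
```

The point of the framework is that the same small generative model supports many such queries, not just this one.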

3.3. Wednesday:

Wednesday took on statistical learning, where learning can be viewed as inference from data to hypothesis (in the talk, it was learning about coin experiments).
Normally, we would talk about inference methods like deduction, induction and abduction to learn new things from a given data set. But here we looked at statistical learning, from data set to hypothesis.
In this kind of learning, we can speak of ''what is your inference at a given time'' (there are learning trajectories, where the hypothesis we lean towards might change as time moves on). So, we are looking for the hypothesis that best fits the data.
Some (pre-selected) hypotheses are tested on data sets and we follow the learning trajectories.

Interestingly, some of the learning trajectories are very similar to human learning trajectories - perhaps indicating that humans go through some of the same steps (of confusion), as we try to learn which hypothesis (out of a certain set of hypotheses) best fits a certain data set.
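The coin example can be sketched as a learning trajectory (my own minimal version, assuming a standard Beta-Bernoulli setup rather than the lecture's exact code): after each flip, the posterior over the coin's bias, and hence the leading hypothesis, may shift.

```python
# Beta-Bernoulli learning trajectory for a coin (toy setup).
# Hypotheses: the coin's bias p(heads). Start from a uniform Beta(1, 1) prior.
def posterior_mean(flips):
    heads = sum(flips)
    return (1 + heads) / (2 + len(flips))  # mean of Beta(1+heads, 1+tails)

data = [1, 1, 1, 0, 1, 0, 1, 1]  # 1 = heads
trajectory = [posterior_mean(data[:n]) for n in range(len(data) + 1)]
for n, m in enumerate(trajectory):
    print(f"after {n} flips: E[bias] = {m:.2f}")
```

Early in the trajectory the estimate swings around (0.50, 0.67, 0.75, ...); with more data it settles down, much like the human trajectories discussed above.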

3.4. Thursday & Friday:

Different techniques for sampling over a distribution. E.g. run a Church program a hundred times, wait for it to give a certain result, and then see what the Church program has to say about other (input) parameters of the program. A closer look at rejection sampling, exact inference, and Markov chains as samplers - advantages and problems.
In short, something about:
- Rejection sampling:
Doesn't scale well. One might have to wait a long time to come to a good sample.
- Exact inference:
Scales badly with the size of the state space.
- Markov chains:
A kind of random walk over probable flows through the program. Doesn't depend on the size of the state space, as exact inference does. And doesn't depend on starting probabilities, as rejection sampling does. See: MCMC_Scratch.
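The Markov-chain idea can be sketched with a tiny Metropolis sampler (my own toy example, not the lecture's code; the target distribution and ring-shaped proposal are invented): the chain random-walks between states, accepting moves in proportion to how probable the proposed state is.

```python
import random

# Target: an unnormalized distribution over states 0..4 (toy numbers).
weights = [1, 2, 4, 2, 1]

def metropolis(steps, seed=1):
    random.seed(seed)
    state, counts = 2, [0] * len(weights)
    for _ in range(steps):
        # Symmetric proposal: step left or right on a ring of 5 states.
        proposal = (state + random.choice([-1, 1])) % len(weights)
        # Accept with probability min(1, w(proposal) / w(state)).
        if random.random() < weights[proposal] / weights[state]:
            state = proposal
        counts[state] += 1
    return [c / steps for c in counts]

est = metropolis(100_000)
true = [w / sum(weights) for w in weights]
print("estimate:", [f"{p:.2f}" for p in est])
print("target:  ", [f"{p:.2f}" for p in true])
```

Note that the sampler only ever compares two weights at a time; it never sums over the whole state space the way exact inference must.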

More in my CogSci 2012 notes.

4. Talk: Meaning as Use: Indexicality, Expressives and Self-Reference.

4.1. Monday:

Stephen Wechsler and Eric McCready talked about ''Meaning as use''.
Self-identification - what is this? Seeing a representation and recognising that to be yourself
(and I couldn't help thinking about the Qbo ...).

Thoughts about the Fregean sense of ''I'' (Frege: ''The Thought'', 1918) compared to Sainsbury's (UT Austin) definition of the ''I'' (jokingly stated: English speakers should use ''I'' to refer to themselves. That is it! No further explanation is given in his definition...).
Thoughts about ''Theory of Mind'', and rather funny examples of how this can easily go all wrong for people with autism. E.g. a person with autism could say: ''Thanks for inviting you'' etc. Evidentials in Gettier scenarios, where it gets rather confusing what we really know ... :-)

4.2. Tuesday:

Hans Kamp - University of Stuttgart and University of Texas at Austin - talked about ''Mental Indexicals and Linguistic Indexicals''
(A short presentation of the story of ''You'' and ''I'').

A closer look at sentences like:
- You think that I don't like you?
or
- If I were you I wouldn't like me.
Where the embedding of the sentence tells a lot about its actual interpretation.
And surely, it doesn't help that it is so unclear what we mean by the ''I'' when we interpret these sentences.
Sometimes we even lose the ''I'' - e.g. you get upsetting news one day - and have no ''I'' that day.... Whereas on the next day you cool down - and all of a sudden you have an ''I'' again?

Really?
All nice examples of mind-boggling complexity arising from relatively simple sentences.

4.3. Wednesday:

Pranav Anand, UC Santa Cruz, talked about ''Remembering, Imagining and De se, revisited''.
And gave us a lot more to think about in the ''Meaning as Use. Indexicality, Expressiveness and Self-Reference'' - series.

The talk started by looking at different kinds of selves. E.g.:

- Self as Constituent (Serving as part of a whole) (Francois Recanati).
- Self as Circumstance (Circumstance are needed for evaluation of the self) (Francois Recanati).
- Self in experimental cases.
- Thematic self.
- Cartesian self (See: Cogito Ergo Sum. Bernard Williams, 1973) - Pure ''ego'', no body, no past.
- Arbitrary self (When we imagine that we could be anyone).

Selves are not simple constructions....

Reflection makes it possible to see the self from different perspectives.
E.g. the process of reflection allows us to go from having a pain, to having a person in pain (the ''Self'' / the ''I'').
I.e. reflection allows us to make explicit what is implicit.

Towards the end there were some (tricky?) comments about Wittgenstein and nociception (where nociception, pain detection, allows us to experience pain; the word being connected to ''noxious'' or ''unpleasant'' and ''sensation'').
In Philosophical Investigations Wittgenstein writes about our Private language: Whereas others can learn of my pain, for example, I simply have my own pain. It follows that one does not know of one's own pain, one simply has a pain.

Perplexing, indeed! As the reflection ''computation'' proceeds, new sides of the ''I'' might be revealed...

4.4. Thursday:

Sarah E. Murray, Cornell University, was next speaker in the ''Meaning as Use: Indexicality, Expressives and Self Reference'' - series.

I sat next to a group of her fans in the audience. They were easily identifiable, btw, as they were wearing identical T-shirts with the words ''Underlings - Undergraduates in Linguistics'' printed on them ... A wonderful bunch of people with a great sense of humour, I learned later on (as one of them tried to convince me that the Cheyenne call their tepee vee'e, and their women he'eo'o, pronounced: he uh-oh... You appreciate the joke more if you understand that tepee is a Lakota word, not a Cheyenne word...).

Murray gave us some interesting thoughts on what you can say in Cheyenne and what you can say in English. But first we needed to get some facts about the Cheyenne straight: the Cheyenne live in Montana and Oklahoma.
The Cheyenne language is a Plains Algonquian language with about 1000 speakers in Montana and Oklahoma.

Then we were given some interesting thoughts about how temporal operators and quantifiers shift context. I.e. how in different contexts an utterance expresses different contents. A few words also about demonstratives and the philosopher David Kaplan. Certainly, lots of stuff to learn about the Cheyenne Language.

She ended her talk by saying she was going back to Montana.
Certainly, it is possible to live the life you always dreamt of, even if you dreamt about living on the great American plains, with the Cheyenne ...

4.5. Friday:

Eric Acton and Christopher Potts, Stanford Linguistics, spoke about ''The latent affective meaning of demonstratives... Feelings in languages''.

A good way to stir up emotions is by evoking solidarity, evaluativity, familiarity, exclamativity etc.
Used in a political debate, this sort of language polarizes: if you like the speaker, you like her even more; if you hate the speaker, you hate her even more....
E.g. Sarah Palin makes it seem as if you and she see things the same way; she uses a lot of ''we'' and ''ours''. Actually, it is not just Sarah Palin - most politicians use these tricks.

Acton and Potts then talked about IMDB film reviews, where they had tried to find words that correlate with good (movie) ratings. Not surprisingly, ''awesome'' is prevalent in reviews that give a movie a good rating...
On confession sites, like Daily Confession or The Experience Project, users can comment on anonymous confessions.
It turns out that solidarity correlates well with the sentence ''I understand''.
In free texts, it appears that ''Your Use of Pronouns Reveals Your Personality''.
Indeed, some pronouns can be very emotional (sometimes even suicidal):
For instance, when we analyzed poems by writers who committed suicide versus poems by those who didn't, we thought we'd find more dark and negative content words in the suicides' poetry. We didn't, but we did discover significant differences in the frequency of words like ''I'' (See Writing styles).
Interestingly:
Pronouns tell us where people focus their attention. If someone uses the pronoun ''I'', it's a sign of self-focus. Say someone asks ''What's the weather outside?'' You could answer ''It's hot'' or ''I think it's hot''. The ''I think'' may seem insignificant, but it's quite meaningful. It shows you're more focused on yourself. Depressed people use the word ''I'' much more often than emotionally stable people. People who are lower in status use ''I'' much more frequently.
...
It's almost impossible to hear the differences naturally, which is why we use transcripts and computer analysis. Take a person who's depressed. ''I'' might make up 6.5% of his words, versus 4% for a nondepressed person. That's a huge difference statistically, but our ears can't pick it up [1].
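The kind of computer analysis described here can be sketched in a few lines (my own toy version; real studies use dedicated text-analysis tools, and the sample sentence is invented):

```python
import re

def pronoun_rate(text, pronoun="i"):
    """Fraction of words in `text` equal to `pronoun` (case-insensitive)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return words.count(pronoun) / len(words)

sample = "I think it's hot. I said so, and I meant it."
print(f"{pronoun_rate(sample):.1%} of words are 'I'")
```

As the quote notes, differences on the order of 6.5% vs 4% are invisible to the ear but trivial for a counter like this to detect over a long transcript.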

5. Talk: Belief Revision meets Formal Learning Theory.

Nina Gierasimczuk (ninagierasimczuk.com) talked about Belief Revision meets Formal Learning Theory.

5.1. Monday:

Monday started with a glimpse into the rather enormous area of research called Learning Theory (learning as a quantitative increase in knowledge, learning as making sense or abstracting meaning, learning as interpreting and understanding reality in a different way, etc.).
Here the focus was on Formal Learning Theory: how an agent should use observations about her environment to arrive at correct and informative conclusions.
Learning theory in education - how information is actually absorbed, processed, and retained during learning - was also given a few comments as the week progressed.

According to Gaerdenfors:
Belief revision is a topic of much interest in theoretical computer science and logic, and it forms a central problem in research into artificial intelligence. In simple terms: How do you update a database of knowledge in the light of new information? What if the new information is in conflict with something that was previously held to be true?
What changes (to your beliefs) are rational? We were given a short introduction to the AGM model:
The AGM postulates (named after their proponents, Alchourron, Gaerdenfors and Makinson) are properties that an operator that performs revision should satisfy in order for that operator to be considered rational. The considered setting is that of revision: different pieces of information referring to the same situation. Three operations are considered: expansion (addition of a belief without a consistency check), revision (addition of a belief while maintaining consistency), and contraction (removal of a belief).
Where:
A belief is a sentence,
or: a belief is a sentence in a formal language.
And the beliefs of an agent are a set of such sentences.

It follows: you should stop believing something if it leads to contradiction....
The AGM revision postulates give the minimal properties a revision process should have. But there is no proof that it actually works...

In other words - i.e. in terms of possible worlds, the set of possible worlds where your beliefs are true - introducing a new belief removes a number of possible worlds (the way the Multiverse actually works? :-) ...).

Common knowledge (I know that you know) is, in this framework, several agents sharing the same knowledge.

5.2. Tuesday:

''Elements of scientific enquiry''. A scientist (learner) and nature - the game. Beliefs as conjectures of a scientist.

Thoughts about E. M. Gold's (1967) language identification theory were presented
(the objective of language identification is for a machine running one program to be capable of developing another program by which any given sentence can be tested to determine whether it is ''grammatical'' or ''ungrammatical''. The language being learned need not be English or any other natural language - in fact the definition of ''grammatical'' can be absolutely anything known to the tester).

A child hears sentences. The child must then come up with a grammar for the language and eventually be able to produce new valid sentences (see also Chomsky's earlier work - before his universal grammar theory).

Notice: in the lecture we were not looking at words. Instead we looked at numbers - in a sense that is just the same; it is just a generalized way of looking at words, a higher order of abstraction.

I.e. an abstraction of a child's language acquisition could be one Turing machine that generates elements. Another Turing machine must then try to guess the grammar (that operates the first Turing machine) that generates the elements.

Learning theory is here about computable functions (which could come out of a grammar). We cannot learn about something which is not computable.
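Gold's setup can be sketched for a simple identifiable class (my own example; I assume the class of languages L_n = {0, 1, ..., n}): the learner conjectures the largest number seen so far, and after finitely many examples it stabilizes on the right language - identification in the limit.

```python
# Class of languages L_n = {0, 1, ..., n}; a text for L_n enumerates members.
# Learner: conjecture the largest element seen so far
# ("identification by enumeration"). This identifies every L_n in the limit.
def learn(text):
    conjecture, trajectory = 0, []
    for datum in text:
        conjecture = max(conjecture, datum)
        trajectory.append(conjecture)
    return trajectory

text_for_L5 = [3, 0, 5, 2, 5, 1, 4, 5]  # a presentation of L_5
print(learn(text_for_L5))  # conjectures stabilize on 5
```

The learner may change its mind finitely often (here, once), but from some point on it never changes again - the sense of ''learnable'' used throughout this course.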

5.3. Wednesday:

Thoughts on scientific strategies = what a class of scientists (learners) does.
According to the course homepage, this lecture was about:
''Rational'' scientists keep on revising their beliefs in the light of incoming data, starting from some background theory. The inquiry is initiated from a set of formulas, and the incoming datum is a formula of the same language. First, formulas that are incompatible with the new datum are removed from the belief set (belief contraction), then the new datum is added in (belief revision). We will devote some attention to the process of contraction (maxichoice and stringent contraction).
A (short) closer look at first order logic (and much more), and how it could be used by scientists to map facts into classes and structures. I.e. how to make structures in the world through first order logic.

5.4. Thursday:

Was about Inquiry via Belief Revision.
With the subtext: Learning is about how you set your starting beliefs, and how you adjust these beliefs.

Claim: Humans are Turing machines. And then (obviously) learning is computable.
But notice: the best learning strategy is not to stick to your beliefs until they lead to contradiction (conservative learning).
Instead (interestingly), occasionally changing (some of your) beliefs, even randomly(?), is a way to optimize learning. It will actually make it possible to learn better!

Kelly, Schulte & Hendricks write (in ''Reliable Belief Revision''):
A conservative methodologist seeks to minimize the damage done to his current beliefs by new information. A reliabilist, on the other hand, seeks to find the truth whatever the truth might be.
It follows that we would go for the reliabilist approach here.

Obviously - In learning theory we want to go for the truth. But, as everyone knows, there are variations over this theme.
In Goodman's paradox, it is not clear what we should learn:
Goodman's paradox is a paradox of induction. Suppose that someone notes that all emeralds that have ever been observed are green, and argues inductively to conclude that all emeralds are green. Now suppose we define grue as the property of being green up to time t (say, the beginning of the year 2050) and blue thereafter. All our inductive evidence supports the conclusion that all emeralds are grue just as well as it supports the conclusion that all emeralds are green; therefore we have no grounds for preferring either conclusion. Many people (though not Goodman) interpret this as a refutation of induction.
So, if intelligent agents in AI or real life are endowed with some learning theory which allows them to change their beliefs,
and the agent's task is to stabilize on a hypothesis for each outcome sequence admitted by the inductive problem,
then the question is of course: how effective is a given strategy in finding the truth?

Talk about hard questions (and there was a lot more on finding the best Belief Revision Method on Friday)...

Thursday, this was followed by reflections on what is learnable.
I.e. an epistemic (cognitive) space is learnable, if:
- It is identifiable in the limit.
- L (the learning method) is a reliable learning method.
- L (the learning method) can stabilize on truth, but possibly not with certainty.
- There are stable true beliefs in the state space we investigate, defeasible knowledge (things that can be invalidated).

5.5. Friday:

Various Belief Revision Methods were investigated:
- Conditioning - Eliminates all worlds that do not satisfy a condition.
- Lexicographic upgrade - Upgrades a world's likelihood.
- Minimal upgrade (see the notes on conservative update above; actually, minimal upgrade is not such a good idea).

One of these methods can then be plugged into a Turing machine that needs to learn something.

And it (the learning method) is considered reliable if, at the very least, it:
- Finds the real world in finite time ...
- No matter what the real world looks like ...
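The three methods can be sketched on a plausibility order (a minimal sketch with my own encoding: a list of worlds from most to least plausible, with the new information φ given as a predicate on worlds):

```python
# A plausibility order: worlds listed from most to least plausible (toy encoding).
def conditioning(order, phi):
    return [w for w in order if phi(w)]          # eliminate non-phi worlds

def lexicographic(order, phi):
    return ([w for w in order if phi(w)] +       # all phi-worlds on top...
            [w for w in order if not phi(w)])    # ...non-phi worlds kept below

def minimal(order, phi):
    best = next(w for w in order if phi(w))      # promote only the best phi-world
    return [best] + [w for w in order if w != best]

order = ["w1", "w2", "w3", "w4"]
phi = lambda w: w in {"w2", "w4"}
print(conditioning(order, phi))   # ['w2', 'w4']
print(lexicographic(order, phi))  # ['w2', 'w4', 'w1', 'w3']
print(minimal(order, phi))        # ['w2', 'w1', 'w3', 'w4']
```

Conditioning is irreversible (worlds are gone for good), lexicographic upgrade keeps them around but demoted, and minimal upgrade makes the smallest possible change.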

6. Talk: Possible Worlds: a Course in Metaphysics (for Computer Scientists and Linguists).

Cathy Legg, University of Waikato, talked about Thought Experiments - Possible Worlds.

6.1. Monday:

The lecture started out by discussing time.

What is time? The class suggested: ''Changes in position of physical stuff'' - I.e. no change, no time...
Then there was a discussion about ''the now'' - What is the now? Various suggestions were discussed. Mostly something which suggested that the now is related to awareness.
And this was just to get the class started...

It was followed by exploring epistemic (cognitive) possibilities.
- I.e. possibilities consistent with what we know.

So, when we explore possible worlds, we explore logical, physical and epistemic possibilities.

Hilarious discussions (between us in the audience) of various (imagined) scenarios followed:
Are they logically, physically or epistemically correct?

We were given Ray Bradbury's A Sound of Thunder as homework for the next day.

6.2. Tuesday:

Was another wonderful Time Travel day.
But first, we had a discussion about strict formal logical proofs versus more loose intuitions.
(Interestingly) the discussion (in the audience) ended up tilting towards the viewpoint that:
formal logical proofs are actually guided by conceivability. I.e. we start from intuitions, which then guide us to more formal proofs.

Then it was back to Time Travel (I.e. our homework, the Bradbury story):
Can you go back in time and kill your grandfather before he even met your grandmother?

Well, according to David Lewis it actually depends on what you mean by ''can''.
You had the gun, the possibility etc. But apparently it didn't happen - so you couldn't?
What is stopping you? ''Do the forces of logic stay your hand?'' (Lewis' phrase)

We were given an introduction to John McTaggart Ellis McTaggart and The Unreality of Time, where McTaggart argued that time is unreal, because our descriptions of time are either contradictory, circular, or insufficient.

I.e. one could make the case that:
- When there is no change, there is no time.
- The only thing that changes is the tense of events - First it was going to happen, then it is now, and then it has happened.

So, one event is both past, present and future - but that is a logical contradiction. So, time does not exist...

David Mellor (Cambridge) thinks the problem comes from our use of grammatical tense (See: Real Time. The unreality of tense).
I.e. David Mellor acknowledges that many philosophers have dismissed McTaggart's argument, as the conclusion is so outrageous. For Mellor the problem is the unreality of tense.

According to Mellor, the only thing that makes the present moment now is that we are (perceiving it and) calling it ''now''. There are no objective facts of the matter about ''when now is''. There is no ''real now''.
1532 is ''now'' to Henry VIII. 2012 is ''now'' to us.
In other words, ''now'' is ''token-reflexive'' (a.k.a. ''indexical''). That just says:
The meaning of ''now'' is just, ''whatever time the word is uttered at''. Just like:
The meaning of ''here'' is just, ''whatever place the word is uttered at''. Just like:
The meaning of ''I'' is just, ''whatever person is uttering the word''....

There is no objective fact about ''where is here''. There is no objective fact about ''who is I'' (...). In the same way, then, there is no objective fact about ''when is now''...!

I.e. Mellor, like Lewis, is taking a ''four dimensional'' (4-D) perspective on time, ''block universe''.
Those who wish to oppose this view by claiming that there is a ''real now'', are often called presentists.

6.3. Wednesday:

Was all about Logic and Reality.

The day started with a question: Is logic a priori to physics - or is it the other way around?
Interestingly, in the context of the Possible Worlds lecture series, (some) people (class discussion) seemed to be leaning towards saying that logic is a priori to physics.

And, apparently, some physicists see it the same way.

Another Time discussion was to follow:

Following McTaggart's thoughts on time (without real change, there is no real time: ''Our ground for rejecting time, it may be said, is that time cannot be explained without assuming time'') and Aristotle's definition of time as ''the measure of change'', we took a closer look at Sydney Shoemaker's thought experiment (a possible world where time passes, but where there is no change).

In Shoemaker's argument there can be a local freeze on a region of the universe where everything stops...
Time moves on (in other regions of the universe). A freeze takes place in one part of the universe every 3 years, in another area every 4 years, and in the remaining part every 5 years.
The inhabitants of this strange world quickly become aware of the local freezes, and they have no trouble calculating the ''freeze function'' for each of the three zones. What's more, they also calculate that there is a global freeze - a period during which each one of the three zones undergoes a local freeze - exactly once every 60 years. Whenever a global freeze occurs, of course, no one is able to see any frozen objects or blacked-out zones, since everyone and everything is frozen at the same time.
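The 60-year figure is just the point where all three freeze cycles line up, i.e. the least common multiple of the three periods:

```python
from math import gcd

# The three zones freeze every 3, 4 and 5 years; a global freeze happens
# only when all three local freezes coincide, i.e. every lcm(3, 4, 5) years.
def lcm(a, b):
    return a * b // gcd(a, b)

global_freeze_period = lcm(lcm(3, 4), 5)
print(global_freeze_period)  # 60
```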
According to Shoemaker:
What the thought experiment does seem to show, however, is that it is possible for rational beings to have at least some evidence for the existence of periods of empty time in their world
And a possible world that appears this way to its inhabitants is surely a world in which those inhabitants have some reason to take seriously the possibility that there are periods of empty time in their world, that they know when those periods occur, and even that they know exactly how long the periods of empty time last.
Shoemaker's argument: science should always choose the simplest explanation, which here would be that time moves on, even though there is no change.
We can imagine time without change - therefore it is possible (in the possible worlds), according to Shoemaker.
Logic is a priori to physics!

We then went on to discuss causation:
According to David Hume, ''causation is the cement of the universe''.

Where causation is defined as ''things that make other things happen''. It's a relationship which holds between two events. And surely, our understanding of causation is a very large part of how we structure our understanding of reality.
But - like time - causation is not so easy to get a handle on. Actually, we do not get insights into causal necessity - but causal regularity.

The apple falls to the ground - not because it has to (because of gravity it has to, but there could be exceptions to the laws of gravity we just don't know about yet) - but because that is what apples mostly do.
According to Cathy Legg:
One of the main themes of Hume's philosophy is that we imagine that we know all kinds of things through reason that we really don't know through reason.
How then do we know those things? Merely via custom and habit. I.e. regularities in our experience.
But doesn't that mean that if we take Hume seriously we have to say that we know that events where an axe hits a window have always been followed by events where the window smashes in the past (i.e. we know the regularity), but nevertheless, we don't know that the next time an axe hits a window the window must smash?

Yes (!)
And we should be careful to avoid epiphenomena like:
Ice cream causes shark attacks (in Australia in the summer).

6.4. Thursday:

Identity was Thursday's thought-provoking theme.
It started out relatively peacefully though. With a comment that it is not easy to get an overview over philosophy or human knowledge. Still, people have tried:

According to Hume: All knowledge falls into two categories:
1. Relations of ideas.
2. Matters of fact.

Digging deeper immediately brought us into trouble again... (so much fun actually). What are facts really?
And when are things the same, and when are they different?

We have two kinds of Identity:
Qualitative Identity: All the properties are the same; everything we measure on the objects is the same.
Quantitative Identity (sometimes also called ''numerical''): Two balls in an empty universe are qualitatively identical, but quantitatively distinct.

According to Leibniz' law:
The Indiscernibility of Identicals: If two things are identical (quantitatively) then they share all their properties.
and: The Identity of Indiscernibles: If two things share all their properties then they are identical (quantitatively).

In Cathy Legg's words:
Qualitative and Quantitative identity summed up:
You can have qualitative identity without quantitative identity insofar as two things might share all their properties (even, in some sense, their spatiotemporal location in a symmetrical universe) and yet still be ''different'' in the sense that this is THIS thing and that is THAT thing (!)
Metaphysicians sometimes express this idea by saying that the two things have different thisnesses. The traditional medieval Latin term for this is: Haecceitas.
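A loose programming analogy (my own, not from the lecture): Python happens to distinguish qualitative identity (==, same properties) from numerical identity (is, one and the same thing).

```python
# Two "balls" with identical properties in an otherwise empty universe.
ball_a = {"radius": 1.0, "colour": "red"}
ball_b = {"radius": 1.0, "colour": "red"}

print(ball_a == ball_b)  # True: qualitatively identical (same properties)
print(ball_a is ball_b)  # False: numerically distinct (two things)
print(ball_a is ball_a)  # True: numerical identity - one and the same thing
```

The `is` check is, in effect, a test of the two objects' ''thisness''.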
Next up was a discussion (E.g. see this presentation) about identity and time for a person.
Would we sign up for the perdurantist or the endurantist world view?
- Perdurantism (four dimensionalism): Maintains that an object is a series of temporal parts or stages.
- Endurantism: Objects are wholly present at every moment of their existence.

I.e.
In endurance theory a person is wholly present at each moment of time.
If time does move, then a whole person will move along with all his body parts.
You are before, now, and tomorrow.

In perdurance theory you are not wholly present at this moment...
The whole you is thus expanded in time. You are all the moments or stages of your life as it moves along, but never totally you because part of you is in the past (or future).

By now it should come as no surprise that a discussion followed.
Parts of the audience liked ''perdurance theory'', as this gives persistence (of something of ''you'').
Others disliked ''perdurance'', as (according to this part of the audience) ''the mental construct of an idea of ego, it's just that, a construct that is malleable. There is no core you. We literally recreate our vision of who we think we are each waking day, from memory. And memory changes.''

Oh well, perhaps both the perdurantists and the endurantists had some good points?

6.5. Friday:

After Thursday's discussion, members of the audience had apparently spent a sleepless night pondering questions about identity and time. And now they insisted that Cathy Legg reveal where she stood on these questions...
She explained, that she is an Actualist (Possible worlds are mere descriptions of ways this world, the actual one, might have been, and nothing else) and a Presentist (Events and entities that are wholly past or wholly future do not exist at all).

Apparently that calmed people down a bit and we could continue with today's lecture...
Well, not until I had argued the case that what is in consciousness is ''real''...?
Which she countered by answering that hallucinations, dreams and so forth aren't real...

Then we could continue (on the subject of personal identity) with a warm-up exercise:

Are you the same
- as you were 3 years ago?
- if you change your memories?
- if you change your parents?
- if you change your DNA?

And (obviously) great new discussions.

It was then onwards to René Descartes and the hoped-for foundation for all knowledge (Cogito ergo sum).
In the Cartesian view we are not our bodies:
- Why not? He claims his body is ''separable'' from him.
and not our sense-perceptions.
- Why not?....Once again, he can imagine himself separate from them.

Eventually we get to: ''I am, then, in the strict sense, only a thing that thinks''.

According to Cathy Legg:
This ''disembodiment of the self'' is an extremely influential moment in modern Western philosophy. It is the basis of a so-called ''Cartesian dualism'', which sees the mind and the body as different substances.
Today, the Cartesian dualism between matter and spirit is widely rejected, by e.g. materialism. Yes, surely everything is made of matter - but can you really ''cut an idea in half'', and say it is made of something else?

For Locke a person is:
A thinking intelligent being that has reason and reflection, and can consider itself as itself, the same thinking thing at different times and places.
Where a key component for personal identity is psychological continuity (Defined as continuity of memory: ''As far as this consciousness can be extended backwards to any past action or thought, so far reaches the identity of that person'').

This gives all sorts of problems though:
- Can you externalize your memories? And would that be ok for personal identity?
- And what about identity and the flow of time:
Identity must be transitive. But that does not hold for memories!
A 70 year old man can remember himself being 30 years old. And a 30 year old can remember being 7 years old. Perhaps an instance of being spanked as a 7 year old. But maybe the 70 year old has forgotten that incident.
Then there is no transitivity of the memories! And therefore no identity.....
(at least only a certain percentage of identity)?
- And couldn't there be more than one entity who is psychologically continuous with me, who has my memories?
Here, Locke would ask us to bite the bullet: Sure, personhood can branch...

Parfit has tried to replace talk of personal identity with talk of survival (which can be more or less, instead of all or nothing, when we talk about personal identity).
I.e. psychological connectedness (P1 remembers most of P2's life) and psychological continuity (there is a chain of psychological connectedness from P1 to P2).
According to Cathy Legg:
He claims the question of personal identity is not a substantive question. Just like the question of ''country identity'' is not a substantive question.
There might be even further revisions to this (psychological continuity vs. bodily continuity).
Apparently, it is possible to set up possible worlds that demonstrate that psychological continuity is more important to us than bodily continuity (even if the experiment is set up in such a way that it tries to hide the psychological discontinuities from us). We care a lot about psychological continuity, and it is simply not pleasing for us to look at this in survival mode (where survival can be a matter of degree).
Nota bene: In my mind the Julian Baggini SpaceQuiz plays with these issues in a very interesting way.
Here, in this lecture, we were introduced to a Bernard Williams thought experiment:
Someone, in whose power he is, tells him that he is going to be tortured tomorrow. He is frightened and looks forward to tomorrow with great apprehension.

Do any of the following additional pieces of information make him feel better:
a. Something will be done to make him forget the announcements.
b. He shall forget all the things he now remembers.
c. He shall have a different set of memories about the past.
d. The set of memories will be those of another person B.

According to Williams: ''Fear, surely, would still be the proper reaction''.

Interestingly, in this last possible world exploration about ''The Self and the Future'', Bernard Williams lets us focus on things that are done to the body, yet a mentalistic view of the self emerges (in our fears)?
According to Cathy Legg:
Bernard Williams has crafted a brilliant thought-experiment which calls into question the whole idea that the key determinant of personhood (whether identity or survival) is psychological continuity, not bodily continuity.
So, which is right? The relation that makes us the same person over time, is it psychological continuity, or is bodily continuity more important?
Apparently, it is possible to persuade listeners to believe different interpretations and make different choices.

In absence of a competitor, self-identification tends to follow the body; but when there are competing targets for self-concern, psychological continuity can trump bodily continuity.

And so this brilliant lecture series ended.
Certainly, not with any clear-cut conclusions about what a person really is... :-)

7. Rain (Reasoning and Interaction).

7.1. Rain Talk: Probabilistic pragmatics: Language understanding and social inference
& Communicating with epistemic modals in stochastic lambda-calculus.

I started Saturday's RAIN workshop with Noah Goodman's (Stanford University) talk about Language understanding and social inference. It was about modelling social reasoning (inferring someone's state of mind and figuring out a rational action).

In a simple blocks world (a world with a few blocks in a few different colors), we were introduced to Bayesian reasoning models that could predict what a speaker would say to describe these blocks when someone pointed to one of them.
Surely, the end-result of running the models wasn't all that surprising. Still, encapsulating the mental reasoning behind even this simple task was far from trivial.

Getting into real speech acts - i.e. predicting the words that Bob is going to say next
(under the assumption that a speaker should be informative and brief, where being informative means minimizing surprise):

P(words | world)

is obviously going to be way more difficult.
But here it was demonstrated that Bayesian reasoning might hold some clues to what goes on when humans reason about what to say next.
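To make the idea concrete, here is a tiny toy sketch of this kind of Bayesian reasoning about speakers and listeners. It is my own illustration, not Goodman's actual model: the blocks world, the utterances and the informativity measure are all invented for the example. The speaker prefers true words that pick out few objects; the listener inverts the speaker with Bayes' rule.

```python
# Toy Bayesian pragmatics in a made-up blocks world (a sketch, not
# Goodman's model). Literal truth = the word appears in the object's name.
objects = ["blue square", "blue circle", "green square"]
utterances = ["blue", "green", "square", "circle"]

def literal(utterance, obj):
    """Literal semantics: does the word truthfully describe the object?"""
    return utterance in obj

def speaker_probs(obj):
    """P(utterance | object): a rational speaker prefers true, informative
    words. Informativity = 1 / (number of objects the word applies to)."""
    scores = {}
    for u in utterances:
        if literal(u, obj):
            extension = sum(literal(u, o) for o in objects)
            scores[u] = 1.0 / extension
    total = sum(scores.values())
    return {u: s / total for u, s in scores.items()}

def listener_probs(utterance):
    """P(object | utterance) by Bayes' rule, uniform prior over objects."""
    joint = {o: speaker_probs(o).get(utterance, 0.0) / len(objects)
             for o in objects}
    total = sum(joint.values())
    return {o: p / total for o, p in joint.items()}

print(speaker_probs("blue circle"))  # 'circle' beats 'blue': more informative
print(listener_probs("blue"))        # 'blue' probably means the blue square
```

Note the pragmatic effect in the last line: hearing ''blue'', the listener leans towards the blue square, because a speaker pointing at the circle would rather have said ''circle''.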

Noah also gave some interesting thoughts on hyperbole and halo. Sentences like: ''I'll be there in 30 minutes''.
Here, round numbers are used vaguely, whereas non-round numbers are read as precise.
Indeed, one can use probabilistic inference to analyze such sentences - and actually come up with an idea of what humans mean when they say such things.

Daniel Lassiter continued with another talk about computational models of communication and reasoning (Communicating with epistemic modals in stochastic lambda-calculus).

Here, communication is actually just another way of learning: getting rid of some possible worlds.
Daniel Lassiter also presented some thoughts on Shannon's dream of figuring out how information can best travel from your head to mine.

7.2. Rain Talk: Dynamic Doxastic Attitudes as Strategies for Belief Change.

Alexandru Baltag (with Ben Rodenhauser and Sonja Smets) from the University of Amsterdam then gave a talk about Dynamic Doxastic Attitudes as Strategies for Belief Change.

We can have trust in various degrees. We can ''trust'', ''semi-trust'', ''distrust'' or ''mixed trust'' someone.

And even for the same source we might have different views. Below: you might believe 1 but not 2:
1. Noam Chomsky's views on binding theory.
2. Noam Chomsky's views on foreign policy.

Dynamic attitudes are then strategies for belief change.
Examples of such dynamic attitudes:
- Infallible trust (if you receive information from such a source, you will update your beliefs unconditionally).
- Strong trust.
- Minimal trust.
- Neutrality.
- Isolationism (whatever the source tells you, the system (you) breaks down. I.e. you cannot receive information from this source).

Obviously, these attitudes influence how you should update your beliefs.
E.g. information from a liar: you receive P, but you will now believe non-P.
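The different attitudes can be sketched as belief-revision strategies over possible worlds. This is my own toy illustration, not the formal machinery of the talk: beliefs are modelled as the set of worlds you still consider candidates for the actual world, and each attitude prescribes what an announcement from the source does to that set.

```python
# A toy sketch of trust attitudes as strategies for belief change
# (my illustration, not Baltag/Rodenhauser/Smets' formal system).
worlds = [
    {"rain": True,  "wind": True},
    {"rain": True,  "wind": False},
    {"rain": False, "wind": True},
    {"rain": False, "wind": False},
]

def update(candidates, proposition, attitude):
    """Revise the candidate worlds after a source announces 'proposition',
    according to the attitude held towards that source."""
    if attitude == "infallible_trust":  # accept unconditionally
        return [w for w in candidates if w[proposition]]
    if attitude == "distrust":          # a known liar: believe non-P
        return [w for w in candidates if not w[proposition]]
    if attitude == "neutrality":        # the announcement carries no information
        return list(candidates)
    if attitude == "isolationism":      # the system (you) breaks down
        raise RuntimeError("cannot receive information from this source")
    raise ValueError(attitude)

print(update(worlds, "rain", "infallible_trust"))  # only rainy worlds remain
print(update(worlds, "rain", "distrust"))          # only dry worlds remain
```

The liar case from the text falls out directly: under ''distrust'', announcing P eliminates exactly the P-worlds, so you end up believing non-P.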

8. The Turing test and its implications.

Thursday evening (June 21) Michael Tye (Professor of Philosophy at the University of Texas at Austin) gave a talk about The Turing test and its implications.
(Tye is perhaps best known for his book ''Ten problems of consciousness''. See Amazon).

As part of the introduction we were all reminded that ''when you see a boat, remember it was built by someone''.
This, as a thanks to the organizers of Nasslli 2012. But also, generally, in the world of computation, as a thanks to Turing and other founders - that they should be remembered by us.

8.1. Apples.

Though the forbidden fruit in the Book of Genesis is not identified, tradition has it that it was an apple.
Later, the apple became a symbol for knowledge, immortality, temptation, the fall of man into sin, and sin itself. In Latin, the words for ''apple'' and for ''evil'' are similar (mālum ''an apple'', malum ''an evil, a misfortune'').
It follows that taking a bite of an apple could be interpreted as a symbolic way of representing someone being introduced to more knowledge (about good and evil, about sin).

It should come as no surprise that apples are also a part of the story about Alan Turing!
According to a BBC page about Alan Turing:
His housekeeper famously found the 41-year-old mathematician dead in his bed, with a half-eaten apple on his bedside table.
It is widely said that Turing had been haunted by the story of the poisoned apple in the fairy tale of Snow White and the Seven Dwarfs, and had resorted to the same desperate measure to end the persecution he was suffering as a result of his homosexuality.
And what about the Apple Computer logo?
Is that a reference to Turing's suicide?
When Steve Jobs was asked about it in one interview he just smiled (knowingly? Still, in other interviews Apple has denied any links. The English actor and journalist Stephen Fry has recounted asking Steve Jobs whether the design was intentional, saying that Jobs' response was, ''God, we wish it were.'').
And, according to the BBC page:
Prof Jack Copeland believes the alternative explanation made at the time by Turing's mother is equally likely: Perhaps he had accidentally put his apple into a puddle of cyanide.
Or perhaps, more likely, he had accidentally inhaled cyanide vapours from the bubbling liquid.
But Jack Copeland argues the evidence should be taken at face value - that an accidental death is certainly consistent with all the currently known circumstances.
In the end we might never know whether it was suicide or an accident. The BBC page suggests that ''Perhaps we should just shrug our shoulders, and focus on Turing's life and extraordinary work''.

True. The only thing we know is that Alan Turing's death and apples were somehow connected ...

8.2. Turing test = the imitation game.

After the introduction about Turing's death and apples, Michael Tye turned to the Turing test.

Is it true that
A computer is intelligent if and only if it can pass the Turing test?
Sure, sometimes computers can fool people into thinking that they are actually people. The BotPrize competition challenged programmers to create a bot for UT2004 (a first-person shooter) that could fool opponents into thinking it was another human player.
And apparently the winners of the BotPrize were very successful, coming up with bots that managed to appear more human than human gamers (by playing like fallible human gamers rather than near-perfect computer AI).

But obviously, the Turing test covers a much larger area of knowledge than just playing a restricted video game. Still, it might be possible.

And the computer might be helped by poor human judges... I.e. who is the judge in the Turing Test? An expert in AI? Or suppose the judge is really stupid? Obviously, it makes a difference!

Eliza (the Rogerian psychotherapist) was an early attempt at making something which could supposedly pass the real Turing test.
Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. Eliza, like a Rogerian therapist, relies on taking a passive role, engaging the patient by reflecting the patient's statements back at her, rephrased as questions. If nothing else fits the program's scheme, ELIZA always has a bunch of fixed phrases to keep the conversation going, such as ''Very interesting. Please go on.'' or ''Can you elaborate on that?''.
A conversation between Eliza and a young woman:

Young woman: Men are all alike.
Eliza: In what way?
Young woman: They're always bugging us about something or other.
Eliza: Can you think of a specific example?
Young woman: Well, my boyfriend made me come here.
Eliza: Your boyfriend made you come here?
Young woman: You don't argue with me.
Eliza: Why do you think I don't argue with you?
Young woman: You are afraid of me.
Eliza: Does it please you to believe I am afraid of you?

Etc.
And really, why not, how do we know that we, ourselves, are not just a bag of tricks like Eliza?
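ELIZA's bag of tricks is small enough to sketch in a few lines. The following is a bare-bones illustration of the technique described above, not Weizenbaum's original script: the patterns and reflections are invented for the example. Match a keyword pattern, reflect the user's words (''me'' becomes ''you''), rephrase as a question, and fall back to canned phrases when nothing matches.

```python
import random
import re

# Minimal ELIZA-style responder: pattern match, pronoun reflection,
# canned fallbacks. (A sketch of the technique, not Weizenbaum's script.)
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
FALLBACKS = ["Very interesting. Please go on.", "Can you elaborate on that?"]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(sentence):
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return random.choice(FALLBACKS)

print(respond("I need a holiday"))   # Why do you need a holiday?
print(respond("Men are all alike"))  # one of the canned fallback phrases
```

Even this toy version shows why the illusion works: the replies are generated from the patient's own words, so the conversation always seems to be ''about'' the patient.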

8.3. The Turing Test and Internal States of a Turing Bot.

Michael Tye wasn't too happy with ELIZA-like intelligence though.
In his opinion, performance on the Turing test is no guarantee of intelligence.
Why?
Imagine a machine that has a database of strings. When a judge types in a string, the machine doesn't ''think'', but looks up a reply in an enormous database. Not good enough (to demonstrate real intelligence) according to Tye.

According to Michael Tye, IQ is mental competence - not just behaviour!?
Imagine a gigantic lookup tree for a bot-person
(Or something more advanced, say something like Paul S. Rosenbloom's attempt to build a functionally elegant, grand unified cognitive architecture in support of virtual humans and intelligent agents/robots).
At least, in Tye's argument, if such systems don't have internal states they are not really intelligent (nota bene: Rosenbloom's system would probably have internal states though...?). It is not enough to have the right outputs; a really intelligent system must also have internal states.

E.g.:
Think of a number - multiply by 2 and add 6. The real Turing would engage in rational thought.
A Turing Bot would look up the string and give a response. The ones doing the thinking are the ones who constructed the bot's strings. Not the bot.

BTW. In the Turing game the string lookup method would work out fine. Input strings are of finite length - therefore the list of strings is not infinite. In general, the set of English sentences is infinite - but not if the strings are limited to a certain length (what can be typed in by a judge on a keyboard in, say, 2 minutes).
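Tye's hypothetical lookup machine is easy to caricature in code. The table entries below are invented for the example; the point is only the shape of the thing: all the ''thinking'' happened when the table was built, and at run time there is nothing but a dictionary lookup, no reasoning and no internal states.

```python
# A caricature of Tye's lookup-table Turing Bot (a sketch; the table
# entries are invented). Bounded input length => a finite (if
# astronomically large) table would suffice for the Turing game.
LOOKUP = {
    "think of a number, multiply by 2 and add 6": "I get an even number, at least 8.",
    "are you intelligent?": "I like to think so.",
}

def turing_bot(judge_input):
    """No reasoning, no internal states: just a dictionary lookup."""
    return LOOKUP.get(judge_input.strip().lower(), "Could you rephrase that?")

print(turing_bot("Are you intelligent?"))  # I like to think so.
print(turing_bot("What is a boat?"))       # Could you rephrase that?
```

On Tye's view, however convincing such a bot's outputs, the intelligence belongs to whoever wrote the table.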

Michael Tye concluded his talk by saying that ''Real intelligence must have beliefs, desires, internal states - that sort of thing''. ChatBots without internal states aren't really intelligent, even if they might appear so at first glimpse.

8.4. Questions.

It was then time for questions.
And it soon became apparent that a large segment of the audience wasn't too happy with that ending to the talk...

Audience 1: So, we can only have intelligence, if we clone human brains? Intelligence cannot be constructed in another way?
Audience 2: One universal Turing machine maps to another universal Turing machine. Why would one design be better than another? If that is what is implied here?
etc. etc.

A lot of discussion followed.

In my mind, some of it was surely a repeat performance of discussions about John Searle's ''Chinese room'' thought experiment. In the ''Chinese room'' thought experiment, Searle noted that software (such as ELIZA) might pass the Turing Test simply by manipulating symbols of which they had no understanding. Without understanding, they could not be described as ''thinking'' in the same sense that people are thinking. Searle ended up concluding that a Turing Test cannot prove that a machine can think.
Which would leave other debaters rather angry: if a machine performs perfectly and answers every question with grace, what more could you ask for? And couldn't you just as well use Searle's arguments against a human?

Surely, comparing a machine's behaviour with human behaviour leads to all sorts of problems:
Is the interrogator's judgement reliable? What can we really infer from comparing only behaviour? And really, what is the value of comparing a machine with a human, if they are entirely different things?
Because of these and other considerations, maybe we shouldn't read too much into the Turing Test. Maybe it is just a cool test to pass... and not much more...?

Here, it would have been interesting to hear from Turing himself.
What might he have said after this talk?

9. Turing Centenary Symposium.

Turing Symposium.
Saturday, June 23 2012 - AT&T Conference Center, the Amphitheater, Austin.

Alan Turing, inventor of the Turing machine and one of the earliest pioneers of computer science and artificial intelligence, was born June 23rd, 1912. To commemorate the centenary of Turing's birth, Kevin Knight, Bob King & Bruce Sterling had been invited to speak at the AT&T Conference Center in Austin.
The lectures were followed by a panel discussion, chaired by David Beaver, Director of Nasslli 2012.
Kevin Knight (University of Southern California) speaks at the Turing Symposium.

9.1. Language Translation and Code Breaking.

Kevin Knight (University of Southern California) talked about Language Translation and Code Breaking.

In ''Computing Machinery and Intelligence'' (Mind, 59, 433-460) Turing proposed a game to decide whether computers are intelligent or not (see: Turing Test). In the game, a computer and a human both attempt to convince a judge (sitting in front of a computer screen, receiving only text messages from them) that they too are conscious, feeling, thinking things.
The implication of the Turing Test is that if a machine acts as intelligently as a human being, then it is as intelligent as a human being...

Obviously, people (back then and now?) weren't too crazy about such argumentation.
So, in Arguments from Various Disabilities Turing writes:
These arguments take the form, ''I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X.'' Numerous features X are suggested in this connection, I offer a selection:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.
...
Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression
(Editors note: I.e. when a method is described (whatever it may be, for it must be mechanical) it is considered to be really rather base, not that intelligent).
In such criticism, it is assumed that a machine cannot have much diversity of behaviour.
Turing counters this by saying: Not much diversity of behaviour is just a way of saying that it cannot have much storage capacity...

And surely, Turing believed that machines would eventually have lots of storage space (!). And be able to do all sorts of clever things, like (the subject of this talk) learn languages, translate between languages, do cryptography and much more...

Foreign languages: According to Kevin Knight, a foreign language is really just English - just coded...
So, whenever you arrive in a foreign land and do not understand any of the signs, you should remember that the inhabitants really speak English; they have just coded everything, and now you have to decode it!

Some things you might be able to figure out on your own. Other things statistical machine learning might be able to help you with.
Actually, a lot of the techniques Turing used to break the Enigma code are still very useful in machine language translation
(Trivia: Bob King reminded us that Mick Jagger has a keen interest in the history of the real-life Enigma project, and even owns one of the original surviving Enigma machines. See [2]).

Knight then presented various techniques used in machine translation (rule-based translation, statistical machine translation using parallel texts, word and phrase alignment etc). It was not all Statistical Machine Translation though. Knight also presented some interesting thoughts on how to make translations between languages without parallel texts (EuroParl, the record of the European Parliament, being one key provider of parallel texts).
Using a combination of techniques might end up giving the best results. E.g. first, a translation might use a rule-based engine, then statistics might be used in an attempt to adjust/correct the output from the rules engine.

Knight then presented some fun problems for us to work on:
- Voynich Manuscript (1404-1438).
(It has been studied by many professional and amateur cryptographers, becoming a cause célèbre of historical cryptology, but has not been deciphered yet - even though a large internet community is working on cracking it. E.g. see here or here).
- Zodiac killer (1967).
(After the decryption of his first code, the Zodiac killer sent many more communications to law enforcement and the media, including his most famous: a 340-character cipher mailed to the San Francisco Chronicle. To this day, the cipher has not been completely cracked, even though some believe it is just an advanced Caesar code - a substitution cipher, where the encoder has ''simply replaced each letter in a message with the letter that is three places further down the alphabet, or some similar replacement'').
- FBI cipher (1999).
(Now crowdsourced. You can send in your suggestions).
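The Caesar scheme mentioned above (shift each letter three places down the alphabet) is simple enough to write out. This is a minimal sketch of that basic substitution idea; the Zodiac ciphers and the Voynich manuscript are of course vastly harder.

```python
# A minimal Caesar cipher: shift each letter by a fixed amount,
# leaving non-letters untouched. Decryption = shifting back.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

msg = "Attack at dawn"
enc = caesar(msg, 3)
print(enc)              # Dwwdfn dw gdzq
print(caesar(enc, -3))  # Attack at dawn
```

With only 25 possible shifts, a Caesar cipher falls to brute force instantly, which is why codes like the Zodiac's 340-character cipher, if it really were one, would long since have been read.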

Finally, Knight gave us a short outline of the story of language translation by machines so far:
Before 1990 - Rule based MT dominated.
Now - Statistical MT dominates.
In the future - Higher understanding of languages will be included in the machine translations.

It was then time for questions from the audience:
Q: So, why can't we now prime such systems with grammar knowledge, and all the stuff linguistics know?
A (Kevin Knight): It is not so easy to feed all of these rather complex rules and understandings into the system.
David Beaver, Director of Nasslli 2012, speaks at the Turing Symposium.

9.2. Alan Turing: Genius, Patriot, Victim.

Next up was Bob King, with a talk about Alan Turing: Genius, Patriot, Victim.

1950 - Life was looking good for Alan Turing.
1954 - Just a few years later he dipped an apple in cyanide and killed himself. See the whole story above
(Trivia: Sadly, he was 41 - not 42, as Douglas Adams would have liked - when he died).

So what happened?

He was born in 1912.
He belonged to what Orwell called the lower upper middle class.
He was a good runner, and almost qualified for the 1948 Olympics in London as a marathon runner.

Not surprisingly, Turing was an odd person.

And, actually, come to think of it, many brilliant mathematicians are like that ...
King mentioned Ramanujan, who also died rather young. And certainly had this Out of this world quality.
The English mathematician Hardy described Ramanujan's work with these words:
These equations must be true,
if they were not true, no one could come up with them....
Only 32, he died of TB, loneliness and malnutrition.

Turing of course had some of the same odd qualities. That odd combination of weird and genius:
E.g: Once, Turing showed up to a tennis match wearing nothing but a raincoat. Nothing underneath. His opponent thought that was rather distracting.
And that was not all: he conducted an active homosexual life, he had a strange voice, a funny laugh, nails of different lengths etc.
Turing was odd.

Certainly, Churchill thought that the people at Bletchley Park (where Turing worked during WWII) were odd.
Speaking to the director at Bletchley Park Churchill said:
I said leave no stone unturned, to find the right people. ...
I didn't mean you to take this literally...
But with his code-breaking skills Turing was also a national hero.

Sure, homosexuality was a criminal activity at that time, just after WW2, in Britain.
Still, one wonders, why was Alan Turing punished for his homosexuality? While so many other obvious security risks got away with it (E.g. see Blunt and Burgess biographies).
Well, the upper class could get away with homosexuality.
But Turing was not upper class. He didn't have close friends in high places who could discreetly help him.
And he had moved away from Cambridge and Oxford to Manchester, which was much less tolerant of such things.
So, he was forced to take medication against his homosexuality.

And finally, the story ended with his housekeeper finding him dead in his bed, with a half-eaten apple on his bedside table.

Bruce Sterling at the Turing Symposium.

9.3. Turing's Strange Sea of Thoughts.

Bruce Sterling was introduced as one of the founders of the cyberpunk movement in science fiction, winner of the Clarke award (2000), and author of the books (among others) ''The Difference Engine'' and ''Zeitgeist''.

Sterling talked about Turing's strange sea of thoughts.

Turing was odd: Sterling reiterated the odd theme:
Turing didn't connect well with other people. He had parents on other continents. He had trouble finding human warmth.

And what about us? Do we like Alan Turing?
Imagine if he was German, making better German codes. Instead of shortening the war by 6 months, he would then have prolonged it by 6 months. Imagine if his name were Alan Turingstein.
Wouldn't he then be a cape wearing villain?

But we like to see him as one of us. We like to imagine that we are not hostile to people like him anymore. That we as a society have progressed a lot.

But, obviously - we haven't.
We are still hostile to people who are very different. Whose work will not be appreciated for 30 or 50 years.

Turing Test: Sterling then went on to talk about the Turing Test:
Maybe it was not really about intelligence.
Let's imagine it was about gender. Let's imagine that it is an artificial computational system that wants to be a woman!
So, how can we help this computational system be a real woman? Help the artificial be real.
Help a guy to be a real woman. And really feel like a woman does. Feel like a mother. Feel the wind like a woman feels the wind. Etc.
Actually, sexuality is older than intelligence, biologically speaking.
Intelligence rides on gender.
So, figuring out gender might actually be a much deeper problem than intelligence?

Machines that kill themselves: But we don't like these thoughts. And when we think about artificial intelligences we certainly don't want them to be anything like Alan Turing!
We don't like to think about Artificial Suicidal Systems. We like IQ - but not suicidal artificial intelligence.
We want machines that can inspire us towards more intelligence and awareness, not towards ''non-awareness''.

We do not like machines that kill themselves, when they find out that they are not real... or that they may never know what it feels like to be a woman.

Concluding remarks, about the future: So, what should we hope for in the future?
Some, like physicist Steven Weinberg, hope that in the year 3000 humans will still remember and appreciate Shakespeare. Sterling didn't think that was all that important.
Instead, Sterling hopes that (a thousand years from now) there will be people around who are just as creative as Shakespeare. According to Sterling, Shakespeare (26 April 1564 - 23 April 1616) will not be all that important 1,400 years after his death (well, let's wait and see...).
Authenticity might be much more important, according to Sterling - that we don't get lost in the digital age.
Indeed, now we are just beginning to wrestle with the problems of the digital age (What is real and what is not). The very problems Turing wrestled with in the Turing Test.
And, surely, we need more philosophers and metaphysicians to help us out...

Question from the audience: When do you call a person phony?
Sterling: When there is no authenticity to it. When there is too much photoshop going on. That's where we are.

Question from the audience: What about machine translation? Can you translate without knowing, without feeling?
Knight: Machine translation still loses a lot of meaning.
Take poetry - most machine translation systems are still very bad at this.

And on that note this super exciting night came to an end!

10. Appendix: Simulations within Simulations.

In the digital age it might not be so easy to decide what is real and what is not.
Indeed, Bruce Sterling concluded his talk (above) by saying that authenticity might be a key concept in the digital age.
But certainly, figuring out what is real and what is not is going to be quite a challenge for us in the future:

According to Nick Bostrom we might be living in a simulation (see my simulation post).
Actually, we already live within a simulation - the one our brains make for us to live in...
- The reality the brain presents to us is obviously a simulation. According to Thomas Metzinger: ''Our brains create a world simulation, so perfect that we do not recognize it as an image in our minds'' (for more, see my CogSci 2012 notes from Lawrence W. Barsalou's keynote and my review of Metzinger's book, The Ego Tunnel).
And, it truly becomes Simulations within simulations, if/when humanity begins to build simulations or games that have conscious observers within them...

11. Appendix: Authenticity.

After the Sterling talk (above) some of us got involved in a brain-twister discussion about authenticity.
It started with a claim that ''Surely, physics tells us what is real''....
Well, for some people, physics is really just simulation. According to Kevin Kelly in Wired Magazine:
Every bit - every particle, every field of force, even the space-time continuum itself - derives its function, its meaning, its very existence entirely from binary choices, bits.
If this sounds like a simulation of physics, then you understand perfectly, because in a world made up of bits, physics is exactly the same as a simulation of physics.
If the universe in all ways acts as if it was a computer, then what meaning could there be in saying that it is not a computer?
Ok, then Information is real! Well, in Information, Physics, Quantum: The Search for Links John Archibald Wheeler writes:
To endlessness no alternative is evident but loop, such a loop as this: Physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics.
Interestingly, Wheeler was optimistic that we would someday understand it all:
Surely someday, we can believe, we will grasp the central idea of it all as so simple, so beautiful, so compelling that we will all say to each other, ''Oh, how could it have been otherwise! How could we all have been so blind so long!''
Indeed, as Sterling said in his speech, in the future we will need more philosophers and metaphysicians to help us out!


-Simon

Simon Laub
www.simonlaub.net

© July 2012 Simon Laub - www.simonlaub.dk - www.simonlaub.net
Original page design - July 2nd 2012. Simon Laub - Aarhus, Denmark, Europe.