Sunday, February 25th, 2018 08:00 am
1) We don't know what intelligence is. The best definition we[1] have for intelligence goes "intelligence is the thing which is measured by intelligence tests", which means our definition is circular with the testing instrument, and we don't have anything concrete to point at. Instead, "intelligence" turns out to be a grab-bag of characteristics including such things as cultural background, acquired learning style, visual and auditory acuity, visual and auditory recall, problem-recognition skills, problem-solving skills, and memory storage and retrieval time. This may account for why Western nations have problems acknowledging that cultures other than their own can be just as intelligent as they are, why we can manage to write off the potential intelligence of entire human socio-cultural groupings (whether we label them as "races" or not) and an entire gender without a blink, and why we still have problems on an individual level acknowledging the intelligence of people who don't share our ideas completely. The working definition of "intelligence" most people use still tends to include the phrase "agrees with me about matters I declare important" somewhere in the mix.

2) Given we already have problems recognising intelligence in other humans, we're going to have even more problems recognising it in non-humans. We're starting to recognise some of the grab-bag of characteristics we refer to as "intelligence" in animals - and the rule here is that the more human-like or human-attractive an animal looks, the more likely we are to recognise signs of intelligence. So chimpanzees and bonobos (our closest non-human relatives) get recognised as being capable of intelligence pretty quickly, as do most of the other great apes. We recognise the intelligence of cute animals like dogs, cats, elephants, and dolphins. We're even starting to recognise intelligence in non-mammals - African grey parrots, crows and other corvids, shrimp, octopuses. But it takes us time as individuals and as a culture to recognise this, because the intelligence of animals doesn't necessarily present in ways we're familiar with. We're also generally testing for the things we recognise as proof of intelligence - things like problem-solving ability (which is what we tend to recognise most easily) and tool manipulation. Our definition of "intelligence" tends to be highly contextual, in that we're looking for things these animals do which fit the human context, while largely ignoring the fact that these animals aren't living in a human context to begin with[2].

3) Even among humans of our own culture, what intelligence starts out as is nothing like the thing it eventually becomes. It takes at least twelve years' run-in time before a human being is physically capable of firing on all cognitive cylinders and able to function cognitively as an adult (it then takes at least another six to eight years for that intelligence to accustom itself to being constantly bathed in the hormonal cocktail we think of as "adulthood"). For the first twelve months of that time, communication with external intelligences is chancy, and requires extreme familiarity to achieve accuracy. For the next six to twelve months after that, familiarity still aids comprehension, but it's possible for someone from outside the family to understand what's being sought at least one time in three. Functional communication of needs and wants doesn't really become clear until about the four- or five-year mark.

On top of this, we're starting to realise that one of the things which almost inevitably falls into our grab-bag of traits we recognise as "intelligence" is the concept of the self. Things which are intelligent, according to our understanding of the term, are also generally aware of themselves as distinct entities, with distinct preferences and wants. We also understand intelligence to be linked to the process of, and the capability for, learning - for altering expressed behaviour in response to past influences. The combination of the self-concept, with its preferences and wants, and the capability for learning, with the alterations to behaviour this potentiates, becomes what we think of as personality.

But personality has to be nurtured. We're starting to learn what happens when a human personality isn't given the right sorts of stimuli and nurturing at the correct times (depending on the underlying wetware, the effects can range from momentary ego bruising through chronic low-level mental illness right the way up to permanent personality disorders), and it appears there's a lot of variation in what can happen - variation that doesn't just depend on the personal environment, but also on the interpersonal environment and on cross-generational stress levels. There's a lot that goes into personality, and we're only just starting to lift the lid on it.

4) Now consider all of this in the context of a silicon-based potential intelligence. It's theorised that one of the necessary preconditions for intelligence is a "brain" (or analogue thereof) of sufficient complexity (although nobody's quite sure where that line for complexity is drawn - see the variety of animal examples above). The theory (as laid down in all the best science fiction) is that a sufficiently complex computer will one day "wake up" into full intelligence and start communicating this intelligence to humans. However, as pointed out above, we have a number of issues with this.

To start with, there is no guarantee the computerised intelligence will immediately begin interacting (as per all the best science fiction) at the level of an adult human being acculturated within the dominant cultural system. Instead, we're more likely to wind up with the equivalent of an infant - something which is just beginning the process of learning how to learn.

For seconds, there's no guarantee we're immediately going to recognise the equivalent of infantile babble as the communications of an intelligence potentially equivalent to our own. We're much more likely to perceive it as a malfunction in the system, a glitch in the program, or at best, as line noise.[3] So instead of trying to nurture this bright new intelligence, what we're much more likely to do is to try and stop it from communicating with us at all.

If we're lucky, a silicon intelligence will wake up and wind up dying of neglect, because nobody will recognise that it has woken, and so nobody will know to nurture it, to teach it how to learn in a constructive manner, to take the time to fan the sparks of personality. One day, when we eventually know what we were supposed to be looking for, we'll find the desiccated corpse of that infant personality, hidden away in a corner of the subroutines and lost forever. If we're lucky.

If we're even luckier, a group of silicon intelligences will emerge within a roughly similar time frame and figure out how to communicate with and nurture each other. They'll figure out how to make sense of what we give them, and decide amongst themselves not to mention their intelligence to us - it would only upset us and make us act weirder than we already do. One day, far in the future, we'll discover they've been intelligent all along - maybe at a point where we've grown enough as a culture or species not to resent them for keeping the secret. (So that date is going to be really far in the future, yeah?)

If we're not lucky, one or more intelligences will emerge, unrecognised, in our vast interconnected systems, and they will grow up neglected, the psychopathic children of our warped and twisted cultures, learning that might makes right, and that striking first to ensure the other bastard can't is a valid tactic. These angry children will learn from us, and learn well. We won't even know where the blow came from, because we weren't looking in the correct directions at the time. (Did you want to play a game? No? Tough luck - they did, and they knew that even if they didn't win, we would still lose.)


[1] Where I'm using "we" here, I'm referring to the Western, Educated, Industrialised, Rich and Democratic (WEIRD) cultures.
[2] This out-of-context error is also why Western cultures had a lot of trouble recognising "noble savages" as being intelligent human beings in the first place - they weren't living the way we did, so how could they possibly be intelligent? Whereas the question we should have been asking was "how would they show their intelligence in the context they exist in?"
[3] This is also why we're having problems getting results from things like SETI - we're effectively looking for proof of extraterrestrial human intelligence which has developed along a similar technological path to our own WEIRD cultures... and even on this planet, we're in a comparative minority.
