written by Bellum on Jan 25, 2007 19:01
If you could develop such a thing, would you?


Do you think it's possible? I've recently decided that the key to true AI (and perhaps a really accurate example of A-life) is hardware that changes itself. I read about its theoretical development in an article a while back. It's interesting, though, because the brain changes itself so fluidly. Actually, I think it's an important component of our intelligence.

EDIT: Note that I'm not a biologist or a computer scientist, nor do I pretend to be. This is pure speculation.
the optimist
written by Puck on Jan 25, 2007 19:48
I personally believe that neural networks are the key to 'true' A.I., whether implemented in hardware or software. Of course, if neurons could be simulated (maybe with some sort of silicon switch), a hardware implementation would certainly be more efficient. However, it seems to me that we have a long way to go in both hardware development and neural theory (if that's what it's called) before we could develop any sort of truly intelligent A.I.
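
Just to illustrate what I mean by simulating a neuron, here is a crude sketch (the weights and threshold are made up for illustration; a real network would learn them rather than have them hard-wired):

# a crude software neuron, perceptron style: a weighted sum of the
# inputs pushed through a threshold
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# a single neuron wired up to behave like an AND gate
print(neuron([1, 1], [0.6, 0.6]))  # 1: both inputs firing, 1.2 >= 1.0
print(neuron([1, 0], [0.6, 0.6]))  # 0: only one firing, 0.6 < 1.0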

I don't know a lot about how the human brain theoretically stores, retrieves and interprets information, but it seems to me that there are still unexplored connections. For example, people who have heart transplants are known to undergo personality changes, so it's possible that the DNA in blood could have an effect on thought processes.

But meh, those are my thoughts. I'd better quit before I start rambling.
journeyman
written by Stargazer on Jan 25, 2007 19:55
I imagine that a machine based on biology would indeed be far better suited for artificial intelligence than something based on minerals or other hard materials. We know that intelligence (or at least sentience) can be achieved through biology; we are ourselves living proof of that, and I think it is only a matter of time before we are able to replicate the process in a controlled environment.

The question may not be so much "can we" as "should we": if we create a new intelligence on this planet to rival our own, I doubt we would be able to coexist with it for long. I do not think humanity is ready for such a tremendous responsibility, and it could well end in catastrophe for at least one of the two species, if not both.

That probably won't stop artificial intelligence from coming to fruition, perhaps within the next hundred years or so; I think that if it can be done, it will be done, whether we like it or not.
written by Barebones on Jan 25, 2007 21:18
Again speculating, I presume we will reach a turning point the moment the design of A.I.s falls mostly into the hands of the machines themselves. We currently use tools to design tools, and it is only natural for this tendency to head towards closing that "feedback loop". And by then we had better have set a good example.
whoosh
written by Buuks on Jan 25, 2007 21:38
There is just one word popping up right now: Borg.

It would be scary if a machine could upgrade its own hardware.
written by Barebones on Jan 25, 2007 21:54
You're watching too much TV (and so am I).

I should clarify that, at this moment, in the Netherlands a TV channel is airing "Star Trek: First Contact".
whoosh
written by Buuks on Jan 25, 2007 22:02
And I am not even watching that.
"gheeh!" (c)h.azuma
written by Yayo on Jan 26, 2007 00:36
How can we expect to develop an Artificial Intelligence? I'm still not sure that we can claim to be a Natural one! : P
I mean, we probably have a better chance of making an AS (Artificial Stupidity). : P


y.
glyph poet
written by Nalix on Jan 26, 2007 01:09
Amen. But I like to speculate about it too. I think the key lies in environment. Our capacity to do things is a big reason why we know how to do them. Computers have computational strength, but they lack creative strength. Artificial creativity is probably the most important next step. The program needs to be free to mess around, and it needs drives. Motives. Creation cannot occur without motives, because without them there is no reason to create.

Though I think that real AI will probably be seen in games before anything else. Reason: game AIs already have the most important building blocks: environment, motive, and the ability to act. Or at least the potential for those blocks is there; most games just don't use it. What they really need to make them at least appear intelligent is a communication engine (rough sketch below): anything the bot can do, it can say and explain. And tell us why.

And I think that in order to start out simple, that should happen in a game environment.
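
To sketch what I mean by a communication engine (all the names here are invented, purely to illustrate the idea): the bot records a motive next to every action, so it can answer "why?" afterwards.

class Bot:
    def __init__(self):
        self.log = []

    def act(self, action, motive):
        self.log.append((action, motive))  # remember why, not just what
        print("bot:", action)

    def explain(self):
        for action, motive in self.log:
            print("I did '%s' because %s" % (action, motive))

bot = Bot()
bot.act("pick up medkit", "health was below 30%")
bot.act("retreat to cover", "two enemies were visible while hurt")
bot.explain()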
written by Bellum on Jan 26, 2007 01:47
I don't think most game developers are willing to pump the resources required for its development into AI research, really.

I see AI being developed at a university or some other organization somewhere after years of expensive development in both hardware and software.
written by Stellanaut on Jan 26, 2007 02:05
Game AI is really a misnomer; it's just a program.

I believe AI would depend on the ability of a program to dynamically edit and amend itself, well, intelligently.
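
A toy example of the mechanical half of that (nothing intelligent about it; the whole problem is the "intelligently" part). This is a sketch of a script that rewrites one of its own constants every time it runs:

import re

COUNTER = 0  # this literal is rewritten in our own source file below

def bump_own_source():
    with open(__file__) as f:
        src = f.read()
    # bump the counter in the source text and write it back
    src = re.sub(r"COUNTER = \d+", "COUNTER = %d" % (COUNTER + 1), src, count=1)
    with open(__file__, "w") as f:
        f.write(src)

if __name__ == "__main__":
    print("I have rewritten myself %d times." % COUNTER)
    bump_own_source()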

As for AI and humans coexisting as things progress, I foresee humans being the ones freaking out and causing havoc, while the AIs will be the level-headed ones.
written by Bellum on Jan 26, 2007 02:09
Stellanaut said:
I believe AI would depend on the ability of a program to dynamically edit and amend itself, well, intelligently.
I agree, but I think the key to this is in hardware development, not just software.
hello! :) felysian
written by Hello! :) on Jan 26, 2007 03:36
What I see is this: everything that makes up anything is a concept. Even a concept is a concept. So until the concept 'concept' can be defined (or at least a skeleton sufficient to define it) and created, AI will not come close to the human level. Everything that has been termed 'AI' so far only faintly reflects a tiny part of human intelligence.

/me largely speculating.

Anyway, I've applied some thought to this very subject, as I am interested in the AI field of computer science. I always laugh when movies or stories have a plot where an AI has enough intelligence to take over the world (along the lines of Terminator, HAL, etc.), because I'm almost certain that it is not possible. Of course, I will not ignore that tiny bit of uncertainty and say that it is impossible.

Oh, and as an afterthought, I'll clarify that I'm talking about higher reasoning/thought and the like, not motor control/physical aspects, because that could be handled by another type of intelligence, which I have not thought a lot about.

So what the AI I'm describing would do is along the lines of: I want to get from here to there, so I'll tell my hardware controller to use what I have previously taught it to make the individual moves. Kind of like the relationship between the front of the brain and the brain stem: the front of the brain initially teaches the brain stem the motions, then the brain stem can take the patterns of motions previously taught and use them independently of thought.
*** Please note, it has been about 4 years since I've had biology, and I'm not certain I'm correct about "the front of the brain" and "the brain stem," but I do remember that brains work roughly how I describe. (I'd look up the proper places/terms, but I'm a bit lazy right now.) ***
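
A rough sketch of that split (all the class and routine names are invented for illustration): the planner teaches the controller a routine once, then just invokes it by name without thinking through the individual steps.

class MotionController:
    # the "brain stem": stores and replays taught motion routines
    def __init__(self):
        self.routines = {}

    def teach(self, name, steps):
        self.routines[name] = list(steps)

    def perform(self, name):
        for step in self.routines[name]:
            print("executing:", step)

class Planner:
    # the "front of the brain": thinks in goals, not individual moves
    def __init__(self, controller):
        self.controller = controller
        controller.teach("walk", ["lift", "swing", "plant"] * 2)  # taught once

    def go(self, destination):
        print("goal: reach", destination)
        self.controller.perform("walk")  # replayed without step-by-step thought

Planner(MotionController()).go("the door")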
written by Bellum on Jan 26, 2007 03:58
Why do you think it's impossible?
hello! :) felysian
written by Hello! :) on Jan 26, 2007 04:41
Well, technically I don't say it is impossible. But to me it seems nearly impossible to define a "concept", and even if that is possible, can hardware be made or software be written to "conceptualize" things in a timely manner? Not only would it have to be able to "conceptualize" information, it would also have to use those concepts to actually do something useful.

I will point out that I haven't actually spent large amounts of time thinking about what a concept is, but I have done some thinking. To me, a concept, very simply, is a very deep way of classifying things. That's the best I can do right now, because my brain is starting to fall asleep and I haven't yet been able to put what I think a concept is into words.

At the moment, the best way I can describe it is with the example of a dictionary: each word is defined by a group of other words, so at some point you need to experience words to learn what they actually mean. If you saw a "red chair" but didn't know either word at the time, you'd also need to see a "brown chair" or a "red table" to work out what "chair" or "red" is.
That is approximately how I see it; not exact, but the idea is in there somewhere.
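
Here is a rough sketch of that "red chair / brown chair / red table" idea (the scene representation is made up for illustration): a word's meaning is whatever scene properties survive intersection across all the scenes it was heard in.

# each observation pairs the words heard with the properties present
observations = [
    ({"red", "chair"},   {"color:red",   "shape:chair"}),
    ({"brown", "chair"}, {"color:brown", "shape:chair"}),
    ({"red", "table"},   {"color:red",   "shape:table"}),
]

meaning = {}
for words, scene in observations:
    for word in words:
        # keep only the properties seen every single time the word occurs
        meaning[word] = meaning.get(word, scene) & scene

print(meaning["red"])    # {'color:red'}
print(meaning["chair"])  # {'shape:chair'}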