Artificial General Intelligence
The aim of this article is to clarify, to myself, the very notion of Artificial General Intelligence. I hope it will set down the pillars of the subject in my mind, because I have spent the last couple of months reading about it without getting anywhere grounded, thanks to the ambiguous ways philosophers, cognitive scientists, and computer scientists interpret some of its key words.
First, I will write down what I currently think, which is more or less immature. Whenever I think of artificial general intelligence I think of Ultron (yes, my favorite comic supervillain, who can out-think Tony Stark and even gods like Thor): a system that can think, understand, plan, imagine, make rational choices, and so on. But I am quite unsatisfied with the basis for judging intelligence, because the criteria are not exact at all: there is no non-inferential test to check whether a system really is intelligent, the way we would want a test to rule out solipsism.
Although the concepts and thoughts here are quite obscure, there seems to be no logical reason that prevents us from thinking about intelligence in a naive way. That last remark was a comment on how to distinguish applied AI from real AI: one cannot know whether a system really understands the contents of its domain. Google Assistant can recognize my photos and distinguish pictures of animals from pictures of my friends, but how on earth do I know whether, in that particular domain, it really understands? (What would it mean to understand a photo? The distinction I have arrived at between understanding and not understanding is this: when you understand, you know that those geometric structures do some cognitive mapping, and you search and express them against your database of experience; when you do not understand, you merely attach labels to mathematical patterns in a way that seems like understanding and thus displays intelligent behaviour.) All these difficulties in making the distinction arise from first-person ontology: we are prone to explain such a system through our own mechanism of "understanding" a domain.
One of the most influential tests, or rather arguments, in my experience is the famous Chinese Room argument by Professor John Searle, which argues that a machine giving the same inputs and outputs as a human need not have a mind. The emphasis in this argument is on a non-inferential test. (For your convenience, this kind of test is like the following: suppose you are spending the night in an area full of thieves, and you have been told beforehand that they often come at night. Suddenly you hear the sound of someone jumping on your roof, and some murmuring. You know from experience that you sometimes fall for illusions, but you are detached enough to go and check whether there really is someone on your roof. I hope that is clear enough.)
All I want to say is that applied AI gives us very powerful tools for intellectual tasks. For instance, I am using Grammarly for Chrome, which helps me write articles, emails, and messages on the internet by improving word choice and punctuation; a theorem prover is another example. But real AI would itself be a mind; I would recommend you think about that deeply (by the way, I was about to type "not to think deeply"). Deep networks are built on the mathematics of linear algebra, with outputs adjusted by metaphorically named "rewards" (I am talking about reinforcement learning), but look at your own mind: it is different.
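To make the "adjustment of output by rewards" remark concrete, here is a toy sketch of my own (not any particular library's API): a two-armed bandit whose value estimates are nudged by rewards, which is the simplest form of the reinforcement-learning idea mentioned above.

```python
import random

def run_bandit(steps=2000, eps=0.1, seed=0):
    """Epsilon-greedy two-armed bandit: rewards adjust value estimates."""
    rng = random.Random(seed)
    true_means = [0.3, 0.7]   # hidden payoff probability of each arm
    values = [0.0, 0.0]       # the agent's current estimates
    counts = [0, 0]
    for _ in range(steps):
        # mostly exploit the best estimate, sometimes explore at random
        if rng.random() < eps:
            arm = rng.randrange(2)
        else:
            arm = max((0, 1), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: the "metaphorical reward" adjusting behaviour
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

print(run_bandit())  # the estimate for the better arm should end up near 0.7
```

Nothing here "understands" anything; the numbers simply drift toward the payoffs, which is exactly the contrast with a mind that I am gesturing at.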
After all this, I hope I have explained the matter properly, but I remain unsatisfied because I have not written down a working definition of intelligence. Such a definition needs to satisfy some criteria, which I read in the second chapter of Artificial General Intelligence (2007), edited by Ben Goertzel and Cassio Pennachin, namely "The Logic of Intelligence" by Pei Wang. They are:
- Similarity (it should at least agree with common usage on some shared characteristics)
- Exactness (it should be exact, and thus avoid fuzzy terms like understanding, intuition, intentionality, mind, cognition)
- Fruitfulness (it should guide research, for example by suggesting what assumptions to build on)
- Simplicity (in mechanism, not in explanation)
And the definition he gives is:
Intelligence is the capacity of a system to adapt to its environment
while operating with insufficient knowledge and resources.
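Wang's clause about "insufficient knowledge and resources" can be made concrete with a tiny toy of my own (not from his paper): an anytime procedure that must hand back its best available answer whenever its resource budget runs out, and simply does better when given more.

```python
def sqrt_anytime(x, budget):
    """Approximate sqrt(x) by bisection, but stop after `budget` steps."""
    lo, hi = 0.0, max(1.0, x)
    for _ in range(budget):
        mid = (lo + hi) / 2
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
    # best answer available under the given resources, exact or not
    return (lo + hi) / 2

print(sqrt_anytime(2.0, 5))    # rough answer with few resources
print(sqrt_anytime(2.0, 50))   # much better answer with more resources
```

The point is only that "adapting under insufficient resources" is a well-defined engineering property, independent of any talk of understanding.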
I think this is quite inadequate, though not wrong. My own guess is something like this: intelligence involves belief networks emerging from beliefs so grounded that we use them to distinguish other kinds of belief, which makes them hard to extract. As an example of such a network: I know that whenever I type the letter I, it appears on the screen as I, and I know that I know this. I know that this article is saved as a draft every second, so if I were to close it accidentally, the belief that it is already saved plays a role in the evolution or generation of my next idea.
Intelligence is the ability to internally map, manipulate, and bring forth the internal structure of any arbitrary content, thereby applying transformations to its pre-established belief networks.
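A crude sketch of what I mean by a belief network (purely my own illustration): beliefs are nodes, edges say "supports", and a new idea emerges once all of its supporting beliefs are already held, the way the autosave belief lets me close the draft without worry.

```python
# Which grounded beliefs must be held for each derived belief to emerge.
supports = {
    "draft is safe to close": {"editor autosaves every second",
                               "autosave has never failed me"},
    "pressing I shows I": {"keyboard works", "editor echoes keystrokes"},
}

def emergent_beliefs(held):
    """Return beliefs derivable from the currently held (grounded) beliefs."""
    derived = set(held)
    changed = True
    while changed:                      # propagate until nothing new emerges
        changed = False
        for belief, basis in supports.items():
            if belief not in derived and basis <= derived:
                derived.add(belief)
                changed = True
    return derived - set(held)

held = {"editor autosaves every second",
        "autosave has never failed me",
        "keyboard works"}
print(emergent_beliefs(held))  # only beliefs whose full basis is held emerge
```

Of course this dictionary lookup is nothing like a mind; it only shows the structural idea of beliefs transforming a pre-established network.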
I will try to generalize this definition in the future. Let me also tell you what I think is missing from today's definitions: some concept of a self, which I take to be the core of the hard problem of consciousness posed by David Chalmers. And yes, I will try to write about other parts of the philosophy of mind and other approaches to AGI.
And Yes! MACHINES CAN THINK!
Thanks!