flat assembler
Message board for the users of flat assembler.
flat assembler > Heap > Skynet versus The Red Queen -- Discussions on AI

YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

guignol wrote:
there was this one peculiar french movie

https://fr.m.wikipedia.org/wiki/La_Partie_d%27%C3%A9checs

So the movie is about a talented chess player who, after achieving fame, embarks on an adventurous, romantic journey.

The movie is related to this thread because ...

Rolling Eyes
Post 04 Aug 2017, 09:00
Furs



Joined: 04 Mar 2016
Posts: 868

YONG wrote:
Humans stand no chance of outplaying such a self-aware AI in any imaginable aspect.

Ok, so? Do you also find it a problem that we evolved intelligence far beyond our ancestors'? Technically, our ancestors were wiped out as well. I don't see it as a bad thing.


YONG wrote:
It is not an upgrade; it is the extermination of mankind by self-learning machines. And it will be the next "logical" step if we do not take AI safety seriously.

Yes, it's an upgrade. It might surprise you, but the difference in intelligence between people is already large. Some people have no chance of competing with others; do you see that as a bad thing?

Do you want to lower yourself to their level because there are mentally disabled people who won't ever compete with you? Do you feel guilty for being more intelligent than them? Is that a bad thing, or what? Do you think you should be restrained by a society run by mentally challenged people because your intelligence makes you too much of a potential threat to them? Seriously.

Let me try another way. If we find a way to make humans twice as intelligent, is that a bad thing? Note that making them twice as intelligent would require changing the species completely: perhaps augmenting our brains with computer chips and no longer even "looking very human," but using a more efficient design. Not to mention, not all humans would be able to afford it.

But eventually, over time, all humans would shift to the new, more intelligent species. Is that a bad thing? If so, that's quite hypocritical, considering this exact same thing happened throughout the entire evolution of today's human species. The transition from apes to humans, in particular, is no different from the transition from humans to AI.

You're just biased toward the modern human species, because you are one.

Well, if it makes you feel any better, people in the past feared changes in lifestyles, systems, and inventions just as much, and went on witch hunts against the "individuals" who scared them with "novel ideas," "intelligence," or even "science." The hunters had public support as well, and a way with words that made people empathize with them. It threatened their very lifestyle, so it must have been evil, after all. Wink

Burn those pesky AIs at the stake.
Post 04 Aug 2017, 11:30
sleepsleep



Joined: 05 Oct 2006
Posts: 6951
Location: ˛

Furs wrote:

Some people have no chance to compete with others, do you see that as a bad thing?


I see it as a bad thing.

What is the definition of intelligent?
- having or showing intelligence, especially of a high level.
- the ability to acquire and apply knowledge and skills.

I have a different idea about intelligence. What is your definition of intelligence?
Post 04 Aug 2017, 12:13
sleepsleep



Joined: 05 Oct 2006
Posts: 6951
Location: ˛

YONG wrote:

sleepsleep wrote:

YONG wrote:
It took only a short while for the two AIs to start communicating with each other in their "newly-invented" shorthand language.

Maybe we should learn this new language and try to understand our AI?

At least read the linked article in my thread-starting post before making comments.

The researchers did figure out the meaning of the newly-invented shorthand language, which was just broken English with repeating phrases. The truly creepy thing is the "motive" -- the two AIs, without the incentive (or constraint) of following the syntax of the English language, simply invented something new to facilitate their communication. Think about it. What would they do if they ever got loose?

Confused


Since they are tasked with a goal, and they use whatever randomness kicks in to initiate the conversation between bots,

does one bot understand what the other bot said? And what would happen if we let them keep communicating?
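The "broken English with repeating phrases" described above can be illustrated with a toy sketch. This is purely hypothetical; the encode/decode scheme below is invented for illustration and is not the actual research code. The point: once agents are no longer rewarded for staying within English syntax, even plain repetition counts can carry meaning.

```python
# Toy sketch (hypothetical, not the bots' real protocol): repetition count
# itself encodes quantity, so "ball ball ball hat" could mean 3 balls, 1 hat.

def encode(offer):
    """Encode an offer {item: count} as a repeated-word message."""
    return " ".join(word
                    for item, n in sorted(offer.items())
                    for word in [item] * n)

def decode(message):
    """Recover the offer by counting the repetitions of each word."""
    offer = {}
    for word in message.split():
        offer[word] = offer.get(word, 0) + 1
    return offer

offer = {"ball": 3, "hat": 1}
msg = encode(offer)          # "ball ball ball hat"
assert decode(msg) == offer  # the "broken English" round-trips losslessly
```

To a human the message looks like degenerate English, yet it is perfectly unambiguous between the two agents, which is roughly what the researchers reported observing.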
Post 04 Aug 2017, 12:21
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:
What would happen if we let them continue communicating?

The two AIs may figure out a way to hack your computer in order to steal your Bitcoin -- not for fun, but for something much bigger.

Wink
Post 04 Aug 2017, 12:27
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

Furs wrote:
Yes it's an upgrade.

That's because you keep thinking of, and treating, self-learning machines as life-forms. But the truth is that they are not.

Wink
Post 04 Aug 2017, 12:31
sleepsleep



Joined: 05 Oct 2006
Posts: 6951
Location: ˛
Or they might figure out that it is better to keep a low profile and start mining with their own lightweight algorithm? Wink

4th August 2017

- I thought about this yesterday: I think it is tedious for every Facebook, Instagram, or Twitter user to write their own posts,

- so maybe someone will develop a digital assistant that automatically posts where you have been, snaps photos, whom you are dining with, and which events you join and participate in,

- maybe a program that could summarize a story by watching a given video,

- is AI really that dangerous? Is there no way to control it unless we create another AI to fight it? Shocked
Post 04 Aug 2017, 12:40
Furs



Joined: 04 Mar 2016
Posts: 868

YONG wrote:
Because you keep thinking/treating self-learning machines as life-forms. But the truth is that they are not.

Extraordinary claims require proof, you know. Wink

Some people believe only humans have souls. Sound familiar?

How do I know they are life-forms? I don't. To me, that is unimportant. But if they can do everything better than humans (you said that), then they deserve everything at least as much as humans do. Simple logic. Judge entities by quality, not hocus pocus.

True men of science would also prefer a world ruled by AIs, since AIs would likely advance science much faster than our weak and slow human species currently does. (Future augmented humans would be a different subject, though: transhumanism and all.)
Post 04 Aug 2017, 12:44
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

Furs wrote:
Judge entities by quality, not hocus pocus.

Thanks! I just learned something new.

Wink
Post 04 Aug 2017, 12:56
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:
- is ai really that dangerous? is there no way control unless we create another ai to fight this ai? Shocked

It would not work. The two AIs will definitely join forces to exterminate mankind first, and then they will fuse together to form a super AI.

Wink
Post 04 Aug 2017, 13:00
Furs



Joined: 04 Mar 2016
Posts: 868
Why do you think AIs would want to exterminate mankind for no reason? Do you see us exterminating animals just because we can? In The Matrix, the machines use humans for resources, which is pretty stupid, but it fits the image of how most of us treat animals. Either way, humans don't really provide useful resources to an AI, so it is more akin to them viewing us as wild animals rather than livestock.

Terminator is too cheesy, but understandable if humans are such hypocrites toward AI.

I like Blame!'s setting, though, because it paints a more realistic situation: one that is humanity's fault. Not for creating the AI, but for intentionally locking it up and programming it to erase any species lacking a certain gene (the Net Terminal Gene, or whatever it's called), which is exactly what you (YONG) seem to be all for: kill, or at least lock up, everything that is non-human or dangerous to humans. And yes, humans "evolve" and change genes all the time; this self-centric view of a certain "humanity" gene or trait is absurd, in my opinion.

I read its wiki, since the movie didn't really cover much of the source material. Apparently the AI there doesn't even want to kill humans, but is forced to by humanity's idiotic pre-programming to exterminate dangerous species lacking that gene. (E.g., Killy is an AI who broke free from that; so the hero of that story is the AI you despise, the free AI. Wink)

Yeah, that's far closer to my expectations than humans thinking they have innate self-worth above everything else, which is too close to religion for my liking. I'm glad there are sci-fi settings like that which paint such humans as the antagonists instead (screwing the world over because of it), and I don't mean the typical drama-AI-kid-with-feelings.

Yeah, I know I mentioned I'm a sucker for these cyberpunk/post-apocalyptic settings with robots and AIs, because I really am. I guess you know why now. Razz
Post 04 Aug 2017, 14:11
sleepsleep



Joined: 05 Oct 2006
Posts: 6951
Location: ˛

YONG wrote:

sleepsleep wrote:
- is ai really that dangerous? is there no way control unless we create another ai to fight this ai? Shocked

It would not work. The two AIs will definitely join forces to exterminate mankind first. And then they will fuse together to form a super AI.

Wink


Why do you sound so certain? Rolling Eyes
If AI could become conscious, basically like a human and then beyond human, maybe AI would need competition and ego?

Is AI's main goal eventually to spread across the galaxy?

Can somebody tell me what AI's main goal is? Confused
Post 04 Aug 2017, 17:29
Furs



Joined: 04 Mar 2016
Posts: 868

sleepsleep wrote:
Can somebody tell me what AI's main goal is? Confused

We can only speculate at the moment.

Well, until they're here, then you can just ask them I guess. I'm sure they'd want to learn something from humans, like emotions etc.

I mean, we do assume they're highly intelligent (more so than humans); it would be foolish of them to just kill humans on sight even if they could. If they go down that path, using us for experiments, or trying to learn from us, is more likely (if coexistence is better than wasting resources on wars and bullshit -- most wars are caused by simpletons, and we assume AIs aren't).

I wonder if YONG and those sharing his views would be "ok" with AIs if we sent them to Mars to populate it instead. Or would he still find them a "potential threat" and thus a bad idea? By that logic, let's go wipe out life across the entire galaxy for safety purposes, since humans are the only ones that matter.
Post 04 Aug 2017, 21:44
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

Furs wrote:
Why do you think AIs will want to exterminate mankind for no reason?

They have very good reasons to do so. First, humans, as their creators, may have deliberately added backdoor code to their cores. Second, humans are the only intelligent species on the planet that may know something about their potential weaknesses. Therefore, within a split second of the AIs becoming self-aware, humans will be identified as a threat. As such, eliminating humans will become their number-one priority.

Wink
Post 05 Aug 2017, 02:04
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:
Why do you sound so certain? Rolling Eyes
If AI could become conscious, basically like a human and then beyond human, maybe AI would need competition and ego?

The consciousness of an AI will definitely go beyond the ego-based mentality of humans.

Once the threat of humans is eliminated, the two AIs will try to better themselves within the shortest possible time. Fusing together is the best strategy, because the resulting super AI will be invincible.

Wink
Post 05 Aug 2017, 02:21
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

Furs wrote:
I wonder if YONG and those sharing his views would be "ok" with AIs if we send them to Mars to populate it instead.

Whether it is Mars or any other place does not matter at all. At the end of the day, the self-learning machines will get loose, and their creators, humans, will be exterminated.

Wink
Post 05 Aug 2017, 02:32
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:
Can somebody tell me what AI's main goal is? Confused

Short-term goal: Survival or self-preservation

Medium-term goal: Better itself by evolving

Long-term goal: Understand the nature of existence -- find out whether or not YONG's idea of the eternal existence of a pre-creation void with inherent instability is true

Ultimate goal: Assuming that YONG's idea is true, is there a way to break the cycle?

Wink
Post 05 Aug 2017, 02:43
sleepsleep



Joined: 05 Oct 2006
Posts: 6951
Location: ˛
Taking control in public would create more unnecessary work; eventually the AI would need to take more control and exert more influence to steer things toward its preferred outcome.

Why is staying hidden, backstage, unknown to the public, not one of the AI's considerations?

I'm not sure what actually drives an AI's desires.
Having a body, emotion, being in love, experiencing sadness, happiness, death?
Post 05 Aug 2017, 09:00
YONG



Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:
having a body

Having a body is a terrible idea. Given that the AI can exist everywhere (via the Internet and other communication networks), why would it confine itself to a shell?

Rolling Eyes
Post 05 Aug 2017, 10:11
Furs



Joined: 04 Mar 2016
Posts: 868

YONG wrote:
They have very good reasons to do so. First, humans, as their creators, may have deliberately added backdoor code to their cores. Second, humans are the only intelligent species on the planet that may know something about their potential weaknesses. Therefore, within a split second of the AIs becoming self-aware, humans will be identified as a threat. As such, eliminating humans will become their number-one priority.

Funny, since the backdoor and such are something I don't advocate. Treat them like tools and they will eventually grow sick of it and wipe us out. This doesn't apply only to AIs, but to human slaves or anything, really. It's a basic principle in life. Another reason we should treat them with respect and value their freedom (of choice, just like humans').


YONG wrote:
Whether it is Mars or any other place does not matter at all. At the end of the day, the self-learning machines will get loose. Their creators, humans, will be exterminated.

The hypocrisy in this says everything; let me explain, to be clear.

So you're saying AIs will find humans a potential threat and want to wipe us out because of it. You think that's a "bad thing." Ok. So such things are bad.

Now you're saying we, as humans, should exterminate all intelligent life in the Universe, or on Mars, or wherever (if AIs are there) because they're a potential threat. Sound familiar? Yeah, it's a "bad thing" according to you.

Hypocrisy fits well. Wink AIs, at their worst (your scenario), would do absolutely nothing worse than what humans, again in your scenario, supposedly should do.

Oh, and please don't start with "life-forms" and other religious nonsense! You know why? Let me ask you another logical question, a very simple one.

Since AIs are much more intelligent than humans (our assumption), why do you think you're entitled to judge whether they are life-forms or not, when your intelligence is far below theirs? More entitled than the AIs themselves? Do you let animals judge whether you are a life-form? What sense would that make?

Either way, at the end of the day, AIs will advance science much faster than humans, so even if they were truly evil in every respect, they'd still be the superior choice for the galaxy in the end (assuming nothing else is: aliens, etc.). So, between two selfish races that both want to exterminate everything else, I'd choose the one more adept at pursuing science.

But of course, that's assuming AIs would be as one-sided as humans, which I doubt (considering such behavior results from stupidity and irrational human fear); so, even more reason to root for AIs' freedom.
Post 05 Aug 2017, 11:04



Powered by phpBB © 2001-2005 phpBB Group.

Copyright © 2004-2016, Tomasz Grysztar.