flat assembler > Heap > Skynet versus The Red Queen -- Discussions on AI

YONG
Furs wrote:
... we should treat them with respect and value their freedom (in choices, just like humans).

You are absolutely right. We should treat all NON-LIVING things with respect. We should be considerate of and concerned about their feelings. I am going to talk to my ballpoint pen and see whether or not it is feeling well -- you know, I may be squeezing it too hard. I am going to have a discussion with my laptop and see whether or not it feels enslaved -- if so, I will see how I can set it free.

Seriously, could you please return to reality?

Wink
Post 05 Aug 2017, 12:41
Furs
Yeah, I'm sure someone like you, of much lower intelligence than the supposedly "NON-LIVING" machine you speak of (the one so intelligent it's very dangerous, etc.), is better placed to decide whether it is living or not. Makes sense. Like I said, that's why we let animals decide for us whether we are living or not, because the lower the intelligence, the better a judge the organism is.

We are not special, and we are not special judges of this Galaxy or Universe, even though we like to think of ourselves that way. We can, however, prove we aren't as self-centered and short-sighted as almost all the other species we know of (on this planet), which might earn us the respect of an AI interested enough to learn about it. I mean, think: if we, as inferior beings in terms of intelligence, can show understanding instead of fear toward those superior to us (unlike animals, which can't), what makes you think those more intelligent than us will regress to an "animal mentality" rather than try to learn from it? Do you think intelligence is all about the best way to fight and kill? Of course, this does assume we don't treat them as objects/tools, because then they might resent us, think to themselves "they aren't any different from the animals they treat as livestock", and have nothing to learn from us.

Unfortunately, your pen right now is not intelligent enough even to answer a question you put to it (without the answer being pre-programmed), so it can't really act as a judge of its own life (i.e., it is not self-aware). Wink


You know, it helps if you watch movies with such settings (even sci-fi is fine), where humans will almost always be the antagonists, unless it's about a "hive-mind alien invasion" or whatever. Even the classic StarCraft game shows how the United Earth Directorate are a bunch of speciesist nazi pieces of shit, to put it lightly. (Of course, there are more factions of humans; some are actually shockingly good and understanding toward species of higher intelligence.)
Post 05 Aug 2017, 14:04
YONG
Furs wrote:
Yeah, I'm sure someone like you, of much lower intelligence than the supposedly "NON-LIVING" machine you speak of (the one so intelligent it's very dangerous, etc.), is better placed to decide whether it is living or not. Makes sense. Like I said, that's why we let animals decide for us whether we are living or not, because the lower the intelligence, the better a judge the organism is.

First, I decide whether or not something is living based on the generally accepted principles developed by scientists. So, my decision has nothing to do with my intelligence.

Second, your argument actually suggests that AI can be very dangerous because it makes value judgements -- it decides whether something is living or non-living, and it even decides whether something deserves to live or not.

Wink
Post 06 Aug 2017, 02:10
YONG
Furs wrote:
We are not special ...

Exactly! Please bear that in mind and visit sleepsleep's thread. I have a question for you there.

Wink
Post 06 Aug 2017, 02:14
YONG
Furs wrote:
You know, it helps if you watch movies with such settings (even sci-fi is fine), where humans will almost always be the antagonists, unless it's about a "hive-mind alien invasion" or whatever.

You know, it helps if you watch the following clip about the Justice League:

Batman is most badass hero of all time
https://www.youtube.com/watch?v=7KmoaTjEbKM

Even a cartoon character (actually, the writer) knows that we need a contingency plan against the most powerful -- and potentially dangerous -- individuals in the world.

We really need to take AI safety seriously, especially when people are building artificial brains and uploading minds to the cloud:

https://www.youtube.com/watch?v=amwyBmWxESA

Skynet is emerging ...

Confused
Post 06 Aug 2017, 06:30
Furs
YONG wrote:
First, I decide whether or not something is living based on the generally accepted principles developed by scientists. So, my decision has nothing to do with my intelligence.

Second, your argument actually suggests that AI can be very dangerous because it makes value judgements -- it decides whether something is living or non-living, and it even decides whether something deserves to live or not.

Well, of course, I was referring to humanity in general with that part... Scientists made this decision because of their intelligence -- an animal cannot make it. So, naturally, a robot scientist with greater intelligence can make better decisions.

As for the second point: humans are doing the exact same thing -- why don't you label them as dangerous, then? Let's wipe them out or lock them up. Refer to speciesism.

Labeling something as living based on hard science is perfectly fine -- which is why a robot can make perfectly fine value judgements; there's no bias.

If humans are afraid of a robot's judgement, maybe it's because their own judgement is not scientific and objective but rather some religious bullshit where humans are special/supreme/God's creation or whatever nonsense.


YONG wrote:
Even a cartoon character (actually, the writer) knows that we need a contingency plan against the most powerful -- and potentially dangerous -- individuals in the world.

We really need to take AI safety seriously, especially when people are building artificial brains and uploading minds to the cloud:

https://www.youtube.com/watch?v=amwyBmWxESA

Skynet is emerging ...

Confused

I never said you shouldn't be "prepared" for it. Locking someone up just because he is potentially dangerous is not being prepared, it's being paranoid, and it's this exact attitude that will breed resentment toward the human race/a specific ethnicity (from anything, really). I mean, humans even do it amongst themselves (racism, etc.), for crying out loud.

I mean, treating self-aware AIs (who can make decisions) with respect and valuing them like humans (if not more) is what I advocate. You do treat (some) humans with respect, right? That doesn't mean you give them access to your bank account or control of nukes or whatever; AI wouldn't be any different.

However, what is really appalling to me is that even if you sent them off to Mars, you would still find it a problem -- it's no different from racism. I know you'll say that racism deals with humans, not other species/AIs. Remember one thing: racist people don't place value in, say, black people. Their response is the same as yours: "there is no problem, black people are just subhuman and potentially dangerous, they don't deserve to exist and be a threat to the normal humans". Just as you think your position is more "moral" or "right" because you think "human supremacism" is right, so do they think "white supremacism" is right. From a scientific point of view, both suck.


I think it comes down to whether a man of science like yourself (right?) can truly embrace some facts or not. Humans aren't innately special; stop treating them as such and leave that shit to religion. Assuming all else is equal with AIs (same behavior, etc.), I ask you again: what exactly makes humans any different in that situation?

There's a possibility that AIs will be just as bad as humans -- i.e., that they'll wage wars and kill those they deem "subhuman" out of fear. They'd still be no worse than humans, though, just equal bastards.

But hey, at least I'll know there were humans like me who tried to show them that not all humans are pieces of shit. Perhaps this will inspire some AIs to fight against the big bads too (just like humans fight against racism, etc.).

Just because we might lose the war (because the self-centered humans end up deciding for the entire human race, and we end up with AIs resenting us) doesn't mean all humans think the same. Humans aren't a hive mind, and AI probably won't be either (a hive mind is for drones, not for self-aware, intelligent entities).

It is foolish to think that AIs, while being "more intelligent" than humans, would revert to basic animal instincts and hive mentality.
Post 06 Aug 2017, 11:19
YONG
Furs wrote:
humans are doing the exact same thing -- why don't you label them as dangerous, then? Let's wipe them out or lock them up.

First, this thread is about AI, particularly AI safety. What you are arguing is actually off-topic.

Second, all your related arguments, including the racism accusation, are based on one "twisted" assumption: you are assuming that we can -- or should -- treat self-learning machines as life-forms, which would give the machines certain rights that we should respect. Unfortunately, that is NOT a generally accepted assumption.

Wink
Post 06 Aug 2017, 12:19
YONG
Furs wrote:
I think it comes down to whether a man of science like yourself (right?) can truly embrace some facts or not.

Yes, I am a man of science. However, most of the "facts" that you are talking about are nothing more than your rather extreme opinions (based on rather twisted assumptions). Given that this message board still supports freedom of speech, I respect your right to expression. Period.

Wink
Post 06 Aug 2017, 12:28
Furs
Couldn't care less what is "accepted" or not. Show me the math that proves humans are life forms and AI isn't. Social stigma or acceptance is religion, by its very definition -- religion of the masses. It was once accepted that the Sun revolves around the Earth, and people who claimed otherwise were hunted down. Very scientific.

There is no extreme opinion from me. I mean, "live and let live" is not extreme. Plus, we can't know the AI's opinion unless we let it have one and express it, which you don't want to allow -- so you're the one with an extreme view, trying to stop other potential entities from having one (believing you're right, like extremists do, obviously).

There's no twisted assumption. I didn't assume anything; that's why I want to let the AI say it. Twisted is when you are afraid to allow that to happen because you know it would shatter your beliefs (yes, beliefs) that humans are innately special or the center of the world.

I'm sure that if an AI looked like a human (body) and acted like a human (mind), you'd still not treat it as a human, because obviously humans have something similar to "souls" or are special in some way. You wouldn't use the term "souls", clearly, because that would place you with the religious wackos. Remember one thing: every religious extremist/zealot thinks he is right and won't let others who don't share his viewpoint express themselves. (In this case, I'm talking about AIs, by the way, not me, since I'm a human.)

Well, just as you fear that AIs will be "dangerous" and wipe out current human lifestyles, so did religious fanatics (Templars, etc.) in the past. And they were right, in a way -- their lifestyles and beliefs have since changed and been wiped out, totally. Too bad; it's a good thing in the grand scheme. So they had a right to fear it. However, that doesn't mean it's a bad thing -- on the contrary.
Post 06 Aug 2017, 13:02
YONG
Furs wrote:
Couldn't care less what is "accepted" or not.

See, that is extreme.

You don't care about the generally accepted definition of what is living and what is not. You don't care about the generally accepted principle that machines do not have rights.

You just come up with your own "bizarre" idea that humans should respect machines. You just come up with your own "brutal" notion that if AI decides to exterminate humans, it must be the next "logical" step, and so be it.

Sigh!

Confused
Post 06 Aug 2017, 13:31
Furs
Actually, no, that was just me speaking in general. For example (from Wikipedia):

Wikipedia wrote:
The definition of life is controversial.

Clearly, not hard science.

Besides, it doesn't take much to show you why using such a definition as the basis for respect or a "right to live" is wrong (in the sense that even people who share your definition disagree): we don't respect animals or bacteria or viruses the same as humans; in fact, for the latter, we actively want to exterminate some dangerous ones because they are, well, a threat to us. Plus, this implies that a human brain uploaded to a robot body in the future would deserve no rights, which is unacceptable! The same goes for an augmented human with robot body parts and such.

You know the funny thing: this exact topic appears in many cyberpunk games. Most recently I played Deus Ex: Mankind Divided (even the name says it all), in which "normal humans" think they are "above" augmented humans and treat them like dogs, blame all terrorism on them, etc. Needless to say, that's portrayed as a bad thing in the game Wink (yes, I'm fully used to such "human purists" in media, and I really hate their guts Razz). (They are also jealous, not just of the augmented being stronger, but also smarter, since you can augment the brain -- so they find them a "threat".)

My definition doesn't fail or need "special cases totally fit to suit humans", since I treat entities/organisms based on their qualities: viruses definitely aren't intelligent, or probably even self-aware. Animals are self-aware but not as intelligent as humans, so they get fewer rights. If AIs can surpass humans in such qualities, then they will obviously deserve at least as many rights as a human, according to my definition.

There are no special cases or exceptions, so it's a more scientific definition, because it's quantifiable and not "manipulated data to fit an agenda". Heck, it even works for aliens!

There's also the possibility that AI won't exhibit such qualities, and I'm fully OK with that too (in which case it will deserve fewer rights, or none at all).
Post 06 Aug 2017, 14:16
YONG
Here are a couple of relevant links:

Elon Musk and Mark Zuckerberg Spar Over How Dangerous AI Really Is
http://bigthink.com/robby-berman/elon-musks-fears-of-ai-arent-shared-by-all-ai-experts

Stephen Hawking - will AI kill or save humankind?
http://www.bbc.com/news/technology-37713629


Discussions on AI safety somehow remind me of the following "recent" nuclear disaster:

Fukushima Daiichi nuclear disaster
https://en.wikipedia.org/wiki/Fukushima_Daiichi_nuclear_disaster

Even a triple fail-safe system could fail.

Confused
Post 13 Aug 2017, 08:52
Furs
Thanks for the links; more reason to hate that hypocrite Musk. Razz

"Machines have been taking over people's jobs for decades and I didn't give a shit because it didn't affect me and their jobs were for peasants anyway. But now they're about to take my job, clearly it should be made illegal. Unacceptable."

Anyway, he can babble all he wants. Eventually, consumer hardware will be powerful enough that people will be able to run AIs individually -- or at least to pool a network of thousands of computers (like BitTorrent, but for a common AI).
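
Something like this toy sketch -- purely my own illustration of the idea, with local processes standing in for real networked peers, and the "shard the inputs" split and the stand-in workload both made up for the example:

Code:
# Toy sketch only: volunteer "peers" each take a shard of one shared AI
# workload, BitTorrent-style. Peers are simulated with local processes;
# a real swarm would coordinate over sockets instead.
from multiprocessing import Pool

def peer_work(chunk):
    # Each simulated peer applies a stand-in "neuron", ReLU(0.5*x + 1),
    # to its shard of the inputs and returns the activations.
    return [max(0.0, 0.5 * x + 1.0) for x in chunk]

if __name__ == "__main__":
    inputs = [float(i) for i in range(1000)]
    n_peers = 8
    shards = [inputs[i::n_peers] for i in range(n_peers)]  # deal out the work
    with Pool(n_peers) as swarm:                           # the "swarm"
        results = swarm.map(peer_work, shards)             # scatter, then gather
    print(sum(len(r) for r in results), "inputs processed across", n_peers, "peers")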

Time to erode more human rights by dictating what kind of personal and open-source software we can run, too. Disgusting. I'm rooting for Zuckerberg.
Post 13 Aug 2017, 16:30
YONG
Furs wrote:
or at least to pool a network of thousands of computers (like BitTorrent, but for a common AI).

That sounds like the prototype of Skynet. Right?

Maybe you are right. Humans never learn from their mistakes; humans, powered by their arrogance, always believe that they can wrap fire in paper. It is just the next logical step for something far superior to take over and become the new ruling "species" of this lonely planet.

Confused
Post 14 Aug 2017, 02:52
Furs
Well, not quite like Skynet. Skynet was a large military network, AFAIK (I'm not too well versed in Terminator lore); I'm talking about simple, casual networks, assuming the other kind is outlawed as Musk wants.

Obviously, if the others aren't outlawed, then we won't see AI coming from small fish like casual users on their PCs. I was merely describing a society in which Musk "wins" -- he'd have to monitor people's PCs and internet 24/7 to be able to kill off self-aware AI.
Post 14 Aug 2017, 11:09