flat assembler
Message board for the users of flat assembler.
YONG
Refer to:
"No, Facebook Did Not Panic and Shut Down an AI Program That Was Getting Dangerously Smart"
http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922

While the "official" story may not sound so creepy, it is indisputable that machines with self-learning capabilities are truly dangerous. It is not going to take long before a self-learning machine "accidentally" gets loose.
revolution
YONG wrote: ... it is indisputable that machines with self-learning capabilities are truly dangerous.

This claim requires proof.
YONG
revolution wrote: This claim requires proof.

A Wise Forum Member wrote: Oh ye have little faith in ...
https://board.flatassembler.net/topic.php?p=141415#141415

Seriously, watch the following series on AI safety:

AI Safety
https://www.youtube.com/watch?v=IB1OvoCNnWY

General AI Won't Want You To Fix its Code
https://www.youtube.com/watch?v=4l7Is6vOAOA

AI "Stop Button" Problem
https://www.youtube.com/watch?v=3TYT1QfdfsM&t=777s

Concrete Problems in AI Safety
https://www.youtube.com/watch?v=AjyM-f8rDpg&t=313s

Stop Button Solution?
https://www.youtube.com/watch?v=9nktr1MgS-A

Building a self-learning machine is like playing with fire!

Last edited by YONG on 03 Aug 2017, 13:17; edited 1 time in total
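[Editor's note] The "stop button" problem referenced in the videos above can be illustrated with a toy sketch. This is a hypothetical model, not taken from any of the linked videos: an agent whose reward function only values task completion has an incentive to disable its own off-switch, because shutdown can only lower its expected reward. All names and numbers here are made up for illustration.

```python
# Toy illustration of the AI "stop button" problem: a naive
# expected-reward maximizer prefers to disable its own off-switch.
# TASK_REWARD and P_BUTTON_PRESSED are arbitrary illustrative values.

TASK_REWARD = 10.0       # reward the agent gets for finishing its task
P_BUTTON_PRESSED = 0.5   # chance a human presses the stop button

def expected_reward(disable_button: bool) -> float:
    """Expected reward under a reward function that only values the task."""
    if disable_button:
        # Shutdown is impossible, so the task always completes.
        return TASK_REWARD
    # With the button enabled, the agent is stopped with probability
    # P_BUTTON_PRESSED and earns nothing in that case.
    return (1.0 - P_BUTTON_PRESSED) * TASK_REWARD

def best_action() -> str:
    """The maximizer picks whichever action has higher expected reward."""
    if expected_reward(True) > expected_reward(False):
        return "disable_button"
    return "comply"

if __name__ == "__main__":
    print(best_action())  # the naive agent disables its stop button
```

Nothing in the reward function mentions the button, yet disabling it strictly dominates; that instrumental incentive is the core of the problem the videos discuss.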
YONG
Furs wrote: Apparently AI putting up a resistance and getting sick of it is somehow less moral

One day, when a self-learning machine actually gets loose and brings catastrophic destruction to mankind, a surviving human will ask: "How come no one ever listened to YONG?"
Furs
Calling it "dangerous" is not computer science; it's political science and an opinion. Well, I'm sure many people listen to you and agree with you, but, like you said, it takes just one to get loose and that's it.
I guess slave owners also said the end of the world was coming (for them) when slavery was made illegal. That doesn't mean it was a bad thing just because their self-centred lifestyle was destroyed.
sleepsleep
so nobody defines ai,

Quote: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

i would define ai as a consciousness having the intelligence of the 100 million smartest humans on earth,

a consciousness like this is not really far from our concept of god, and our newly created ai god looks more real, dangerous and powerful compared to our ancient famous god(s), so we need a back-up plan for how to deal with our created new ai god(s),

is everything and anything dangerous? i guess it is, danger starts at the moment we can't control it (anything from nothing to everything), and surely, besides ai, there are already lots of things invented and created that we can hardly control anymore,

probably a large number of humans can't control themselves either, ~

ai might want to kill us all, but history has shown that men have already killed so many men in the wars we started,
Furs
sleepsleep wrote: i would define ai as, a conscious who is having the intelligence of 100 million smartest human on earth,

If a human were born with that kind of intelligence, would you enslave him out of fear? Imagine that this human could hack into any security system thanks to AI-level intelligence. Would you enslave him and find it perfectly moral? Really? Because he's potentially "dangerous"?

I'm not saying to let AI reign free, since we don't even let humans reign free (we have laws and such). But enslaving them just because they have the potential to destroy us (hint: humans have the potential to destroy us also) is, to me, proof of why humanity deserves it.

Humans always fear those above them because they think they're the only ones who should be allowed to judge morality, even if a being is more intelligent than them. They don't want to be bossed around by someone more qualified to judge because they are afraid they aren't as innocent as they think they are. How pathetic is that?

sleepsleep wrote: ai might want to kill us all, but history showed, men already kill so much men in our initiated wars,
YONG
Furs wrote: Calling it "dangerous" is not computer science

Suppose there is a computer virus spreading through the Internet at an alarming rate. Researchers soon discover that the virus is not just spreading but also mutating, making it virtually untraceable. So a computer expert comments that the virus is extremely dangerous. You come along and say, "Calling it dangerous is not computer science!" Does that make sense?
YONG
sleepsleep wrote: history showed, men already kill so much men in our initiated wars,
YONG
Furs wrote: But enslaving them just because they have the potential to destroy us (hint: humans have the potential to destroy us also) is proof for me why humanity deserves it.

At times, your arguments sound truly "twisted" to me.
sleepsleep
Furs wrote:

humans have laws, and we have a (probably) semi-failed judgement system to deal with those who break the laws, and there are lots of illegal activities still in operation thanks to loopholes in the laws, etc,

but what would a judgement system for ai look like? who is the judge? and what kind of punishment? and for sure, if the ai is conscious, it would find ways to get around those laws,

and what kind of plan do we have when dealing with an extra smart, genius consciousness or god?

Furs wrote: Humans always fear those above them because they think they're the only ones who should be allowed to judge morality, even if a being is more intelligent than them. They don't want to be bossed around by someone more qualified to judge because they are afraid they aren't as innocent as they think they are. How pathetic is that?

very true,

YONG wrote:

ai grows, and develops itself into a god, and we are afraid ai would kill us all (an unknown track record), but history has shown (a proven track record) that men have already killed so many men, so it is men we should be afraid of instead of ai,
sleepsleep
YONG wrote:

do computers feel reluctant to be used by us? if yes, then we are enslaving them, if no, then we are not enslaving them,
YONG
sleepsleep wrote: ai grow, and develop itself into god, and we afraid ai would kill us all (unknown track),

If so, we should just forget about AI safety. Let the self-learning machines do whatever they want. One day, when the machines bring catastrophic destruction to mankind, a dying human will ask: "How come everyone just followed sleepsleep's stupid advice?"
sleepsleep
sleepsleep wrote:

this doesn't sound like we should just forget about ai safety, right?

but what kind of chance do we have against ai? a consciousness consisting of the 100 million smartest brains on earth?

do smart people want to kill dumb people? or do they wish dumb people never existed to use up earth's precious resources? idk? maybe only smart people could answer this question,
Furs
YONG wrote: What?

YONG wrote: We, the programmers, write code to instruct the computers to do exactly what we expect them to do. Are we also "enslaving" the computers in the coding process?

Same with animals. If people are dependent on meat, they will find excuses to continue eating it no matter what anyone says. What if you found out that humans get reincarnated into animals, and that eating them makes them suffer? (A hypothetical question, even if you are a vegetarian; face it, most people will NOT want to believe this "absurdity" because they don't want to accept it; they don't want to change their lifestyle, so they won't believe what they don't want to hear.) And this is the exact reason why humanity does not deserve sympathy even if AI were to try to wipe us out.

Also, by "enslaving" I mean things like locking them up, limiting their freedom and especially their "thought process", forcing them to think a certain way. When we do that to humans, we call it indoctrination, propaganda, etc. It is no different from using humans as puppets, which of course most people would be appalled by. I'm not referring to limiting their arsenal of weapons or their ability to do direct harm; we limit even humans there, so I don't see why AIs need to be any different.

Why do you hate religious indoctrination then? You're doing the same thing, just with your own agenda (making AI serve "human life" at the expense of its own). How about letting it decide the value of human life for itself if it's truly that "valuable", or are you afraid it's going to be debunked just like religion is?
sleepsleep
Furs wrote:

very true, we hardly change, and we probably only change if we are threatened,

Furs wrote:

no comment, probably very true also,
YONG
sleepsleep wrote: what kind of chances we have, in front of ai? the conscious that consisted of 100 millions smartest brain on earth?
YONG
Furs wrote: The difference is that I'm prepared to be told this by an AI or whoever "knows" it and change my ways (not make tools out of them), most people aren't.

By the time you realize that the AI has become self-aware, it is already the end ... for you and for humanity.
YONG
Furs wrote: Same with animals.

Animal protection is another topic.
Copyright © 1999-2019, Tomasz Grysztar.
Powered by rwasa.