Talk about other nerd culture stuff in here.

The Machines Will Replace Us All

26 May 2015 11:42 - 26 May 2015 12:12 #202951 by Mr. White
I guess the possible upside is that a new field of work could open up.

I'm in!!

Last edit: 26 May 2015 12:12 by Mr. White.
The following user(s) said Thank You: OldHippy, stoic

26 May 2015 12:06 - 26 May 2015 12:08 #202952 by OldHippy

eekamouse wrote: Noam Chomsky does a fair bit of debunking of this idea of an achievable threshold for AI in this interview:

www.singularityweblog.com/noam-chomsky-t...-is-science-fiction/

It's been some time since I watched it. He comes close to outright debunking it, or at least the idea that we're capable of creating it. But he certainly believes that it's a long, LONG way off. He speaks about it more generally in other areas, but this one is pretty specific.


To be honest, I don't know nearly enough about the nitty-gritty details to buy into his side or Hawking's side. All I know is that it is fascinating and I want to play a game that broaches this subject. Chomsky does make some interesting points though, as always.

Suffice it to say that several people in the field who are Chomsky's intellectual equals (but who are actual high-level programmers) seem to believe it is entirely possible within a few decades.

For me, mostly it's just fun to speculate and I really want a game that deals with this issue. I doubt any of us here really knows enough to comment intelligently on the topic, so we're left linking to other people and guessing. Which is the fun part for me anyway.
Last edit: 26 May 2015 12:08 by OldHippy.

26 May 2015 12:31 - 26 May 2015 12:32 #202954 by Mr. White

JonJacob wrote: For me, mostly it's just fun to speculate and I really want a game that deals with this issue.


A few sessions of the Judge Dredd RPG?

Last edit: 26 May 2015 12:32 by Mr. White.
The following user(s) said Thank You: OldHippy, stoic

26 May 2015 13:16 #202957 by ThirstyMan
I think there is a huge difference between mimicking intelligence and actually being intelligent. The machine isn't intelligent; it is simply a holder for the program. What we are really saying is that the program is intelligent, and a program, however difficult the language it is written in, is basically a set of instructions to do a particular task. A program that writes another program is still limited by the original program's complexity. Computers are not, in this sense, creative in any way, because of the way they are built (I mean programs, of course).

As far as we know, intelligence (or self-awareness) in humans is not a matter of reaching a critical mass of neurons and then, hey presto, awareness. This is like saying that the bigger a program is, the more likely it is to develop self-awareness. That idea only exists in the realm of science fiction, because a second's thought discards it as illogical. We, in fact, have no clue what consciousness is or how it develops. We have guesswork. That is not enough to start a technological revolution.

I think there are far more pressing problems to worry about than this, like global annihilation, the rise of theocracy, or environmental crises, any one of which is a far greater threat than some supposed flash increase in technological capabilities and an understanding of consciousness which we simply do not have at this stage. The other problems are a real and present danger, not a hypothetical global catastrophe.
The following user(s) said Thank You: OldHippy, stoic

26 May 2015 13:21 - 26 May 2015 13:22 #202958 by Mr. White
Still...it can't hurt to be prepared.


humanitydeathwatch.com/
Last edit: 26 May 2015 13:22 by Mr. White.
The following user(s) said Thank You: stoic

26 May 2015 13:22 #202959 by Michael Barnes
Why didn't someone page me when a reference to fucking RUNAWAY was made here? FFS people. I care.
The following user(s) said Thank You: stoic

26 May 2015 13:54 - 26 May 2015 14:08 #202962 by OldHippy

ThirstyMan wrote: I think there is a huge difference between mimicking intelligence and actually being intelligent. The machine isn't intelligent; it is simply a holder for the program. What we are really saying is that the program is intelligent, and a program, however difficult the language it is written in, is basically a set of instructions to do a particular task. A program that writes another program is still limited by the original program's complexity. Computers are not, in this sense, creative in any way, because of the way they are built (I mean programs, of course).

As far as we know, intelligence (or self-awareness) in humans is not a matter of reaching a critical mass of neurons and then, hey presto, awareness. This is like saying that the bigger a program is, the more likely it is to develop self-awareness. That idea only exists in the realm of science fiction, because a second's thought discards it as illogical. We, in fact, have no clue what consciousness is or how it develops. We have guesswork. That is not enough to start a technological revolution.

I think there are far more pressing problems to worry about than this, like global annihilation, the rise of theocracy, or environmental crises, any one of which is a far greater threat than some supposed flash increase in technological capabilities and an understanding of consciousness which we simply do not have at this stage. The other problems are a real and present danger, not a hypothetical global catastrophe.


I guess, at this point in my life, I'm not entirely convinced that we are anything more than a complex program, and that consciousness/self-awareness is some kind of illusion. The more I read about the neurosciences, the more it seems to me that everything we do is a direct result of chemical reactions in the brain. Funnily enough, I sound like the atheist here and you sound like the theist/deist, sort of.

But even if the experiments being run on AI right now are not going to create a singularity, they could still be incredibly dangerous, because essentially these people are creating programs just to see what happens. Even so, some of the things we do know about robotics and simple AIs show that a huge cultural shift could take place very shortly, with lots of jobs on the line and nothing really to replace them, as there just aren't enough positions left for other skill sets. We may find ourselves with too many people in general and not much left for them to do. I feel like there are some real concerns here that could easily equal any environmental concern (and even tie directly in with it). Of course, none of this really matters, because there isn't much I can (or will) do beyond living a particular life as one in seven billion.

But at this age I have a hard time really caring about any of it; I'm far too detached. Sometimes I can sort of care about our collective future, but that's mostly through my son, which is still a pretty fucking selfish direction to come from. That's why I like this more as a thought experiment, and the idea of a cool game that deals with this issue.

I keep wanting to work on a design for something. Originally it was the idea of creating a religion in-game by moving through history... but this appeals to me much more now.
Last edit: 26 May 2015 14:08 by OldHippy.
The following user(s) said Thank You: stoic

26 May 2015 14:01 #202963 by mutagen
I didn't watch the Chomsky video, so perhaps I'm missing some subtlety of the argument. But the notion that we can't build a working intelligence because we don't have all our ducks in a row theory-wise strikes me as absolute nonsense. Isn't that like saying that Homo erectus couldn't harness fire because they didn't have a proper understanding of thermodynamics? For fuck's sake, I'm pretty sure nature doesn't have a theoretical understanding of consciousness, but it managed to get there, through random mutation no less. To suggest we can't do the same in a few generations of directed research is naive, I think.

Lack of a comprehensive theory of intelligence only means that once we create an intelligence, we probably won't be able to control it. So, like Homo erectus, we will probably get burned some.
The following user(s) said Thank You: OldHippy, stoic

26 May 2015 14:02 #202964 by Gregarius

JonJacob wrote: All I know is that it is fascinating and I want to play a game that broaches this subject.

Have you tried The Omega Virus, you human scum?
The following user(s) said Thank You: OldHippy, stoic

26 May 2015 14:21 #202966 by ThirstyMan

mutagen wrote: I didn't watch the Chomsky video, so perhaps I'm missing some subtlety of the argument. But the notion that we can't build a working intelligence because we don't have all our ducks in a row theory-wise strikes me as absolute nonsense. Isn't that like saying that Homo erectus couldn't harness fire because they didn't have a proper understanding of thermodynamics? For fuck's sake, I'm pretty sure nature doesn't have a theoretical understanding of consciousness, but it managed to get there, through random mutation no less. To suggest we can't do the same in a few generations of directed research is naive, I think.

Lack of a comprehensive theory of intelligence only means that once we create an intelligence, we probably won't be able to control it. So, like Homo erectus, we will probably get burned some.


No, it isn't like that at all. You must first define what exactly you mean by intelligence. Is this a self-aware program?

We do not program machines in an evolutionary way, where there is a drive to meet a need (like shelter or fire), because those machines/programs are unaware that the need even exists. Humans were not programmed to discover fire; they did it because of a self-awareness which programs do not have. And nature did NOT get where it is by random mutation alone; that is a huge error in evolutionary understanding. It got there by survival of the fittest COMBINED WITH mutations. Otherwise it would be like building a watch by throwing all the bits in the same direction and hoping they converge.

Programs do not mutate and harness a fitness-superiority cycle, because their instructions are essentially crude and will break down if the code is mutated. Programs are not like us. We can survive minor gene mutations and build on them. Programs will crash, because of the way they are developed and the lack of redundancy in complex architectures.

We are the programmers: in order to develop self-awareness, WE must program it, and therefore, yes, we must understand it. Programming is not an evolutionary procedure. It has an end point which we have to define before we even start. This is why AI is so unlikely. It is more akin to throwing the pieces of a watch together and hoping they form a working watch when we have no knowledge of the watch's inner workings. We are not nature, so I would say it is far more naïve to assume that what you read in sci-fi is, in fact, science. You might WANT this to happen, but the processes are nowhere near being in place. Please, listen to Chomsky and get the subtleties, because it is important not to get either too starry-eyed over the technological future or too depressed over the advancement of technology based on watching lots of sci-fi.
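
To make the fragility point concrete, here's a minimal sketch in Python (the toy function, mutation alphabet, and trial count are all made up for the example): randomly change one character of a tiny working program and count how many mutants still compile, run, and give the right answer.

[code]
import random

SOURCE = "def add(a, b):\n    return a + b\n"
ALPHABET = "abcdefghijklmnopqrstuvwxyz()_:=+, \n"

def mutate(src):
    # Replace one randomly chosen character with a random one.
    i = random.randrange(len(src))
    return src[:i] + random.choice(ALPHABET) + src[i + 1:]

def still_works(src):
    # A mutant "survives" only if it still compiles, runs, and
    # computes the same answer as the original.
    try:
        namespace = {}
        exec(compile(src, "<mutant>", "exec"), namespace)
        return namespace["add"](2, 3) == 5
    except Exception:
        return False

trials = 10_000
survivors = sum(still_works(mutate(SOURCE)) for _ in range(trials))
print(f"{survivors}/{trials} mutants still work")
# Typically only a few percent survive, and nearly all of those are
# no-op edits where the mutation happened to pick the original character.
[/code]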
The following user(s) said Thank You: Kailes, mutagen, stoic

26 May 2015 15:01 - 26 May 2015 15:05 #202968 by OldHippy

ThirstyMan wrote: You might WANT this to happen, but the processes are nowhere near being in place. Please, listen to Chomsky and get the subtleties, because it is important not to get either too starry-eyed over the technological future or too depressed over the advancement of technology based on watching lots of sci-fi.


It's actually based on reading reports and listening to experts in the field... which Chomsky is not. I've read about this in sci-fi since I was a child... only recently have I read the conference reports and the experts... THAT is alarming. You still make good points, but I have to lend more credence to the people doing the actual work, the experts in the field, most of whom have serious concerns.

Even if we don't understand consciousness, many of our greatest inventions were accidents, things we never intended to invent.

But luckily for me, the only difference between me being worried about this or something else is what I post on this site... otherwise the world should be pretty much the same.
Last edit: 26 May 2015 15:05 by OldHippy.
The following user(s) said Thank You: Cranberries, stoic

26 May 2015 15:29 - 26 May 2015 15:40 #202969 by ThirstyMan
The other issue you should be aware of is that many of the people commenting on this have zero background in computing or even philosophy (I'm not talking about people on this forum; I mean 'public' figures).

Sam Harris may be smart, but he is not working in the field in any way (at best he is a moral philosopher), and neither is Stephen Hawking. Their views are as valid as any other outsider's, and Hawking, who has never worked in the field of philosophy either, is in this case vastly underqualified. Musk is not involved in computer science, neural research or philosophy, and again his views have little validity. It doesn't matter if you are smart; it matters whether you actually have knowledge of what you are talking about at the cutting edge. None of these people do.

Chomsky, BTW, is a cognitive researcher, so I would say he is very much involved in the field, considering that thinking and the nature of thought and intelligence are going to be critical for the development of this kind of technology.
Last edit: 26 May 2015 15:40 by ThirstyMan.
The following user(s) said Thank You: stoic

26 May 2015 15:36 #202970 by Kailes
And even if we could create a self-aware machine, it would be subject to many of the same limitations humans must face. It might have a lot more calculating power than the human brain, but there are a lot of problems that are solvable in theory yet in practice cannot be solved in a relevant timeframe, even with exponentially growing computing power. Even something as relatively simple as Go cannot be played competitively by current computer programs, because the decision tree grows way too quickly. Humans deal with those kinds of problems mostly by abstraction and pattern recognition. Computers are getting better at the latter, but only because we tell them what to look for.

I'm certainly no expert in the field, but to me the ability to abstract from reality doesn't seem like a problem that can be solved simply by computing stuff faster, which is what most AI and singularity advocates are betting on, and even on that front progress is slowing down. Once we understand how our ability to abstract works, we might have a chance of creating something that is truly intelligent. Self-awareness, which is necessary for creative problem solving, may be an entirely different beast, though.
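
For scale, a quick back-of-the-envelope sketch in Python (the figures of roughly 250 legal moves over roughly 150 turns for Go, and roughly 35 over 80 for chess, are commonly cited rough estimates, not exact values):

[code]
# Rough size of each game tree: branching_factor ** game_length.
go_tree = 250 ** 150
chess_tree = 35 ** 80

print(f"Go:    ~10^{len(str(go_tree)) - 1} lines of play")
print(f"Chess: ~10^{len(str(chess_tree)) - 1} lines of play")
# Go: ~10^359, chess: ~10^123; for comparison, the observable universe
# holds roughly 10^80 atoms, so raw speed alone cannot brute-force Go.
[/code]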

Even if all that were somehow possible, AIs would still have to deal with physics, limited resources, the competing interests of other self-aware entities, and simple randomness, so their superior reasoning abilities might actually not be a big deal. Probably everyone knows a really smart person who is unable to get any shit done for whatever reason. The same might be true of self-aware AIs.

I certainly am not worried about human extinction caused by AIs gone crazy. As others have already said, there are way more pressing threats, though near-term extinction of the human species is extremely unlikely even in worst-case scenarios. For some reason, though, doomsday scenarios have been quite popular over the last few decades.
The following user(s) said Thank You: stoic

26 May 2015 15:45 #202971 by mutagen
I like to think that the success of man is due to his ability to extrapolate upon the evolutionary principle. Whereas nature will randomly mutate and then imperfectly select, man will selectively mutate and perfectly select (no genetic drift here). So man is cold, man wants coat like bear, man creates coat like bear, man not cold, tell the children. No waiting around for nature to meander up to the solution. This is why I think man is capable of creating anything nature can, but in a much more compressed time frame.

Still, as you say, this paradigm may not apply, as programs are much more fragile than living things. The vast majority of mutations to code will be instantly fatal, whereas most mutations to organisms are silent. The upshot is that mutations in code don't really accumulate as they do in species, so there is no chance for them to accidentally aggregate into something amazing. But this only means we can't randomly mutate code, and that was never our intention. We will mutate selectively, and evaluate the result according to some metric, because that is what we do as a species. I suppose this is where Chomsky must come in, with his notion that we won't have a well-defined metric (no, I still haven't watched the video). But we don't need to build something that is self-aware (whatever that means) for it to be dangerous; all we have to do is build something that behaves like people, because people are plenty dangerous enough, and we have plenty of metrics to define how people behave.
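
Here's a minimal sketch in Python of that "mutate selectively, evaluate against a metric" loop (the target string, alphabet, and mutation rate are invented for the example): keep a mutation only if it doesn't lower the score.

[code]
import random
import string

TARGET = "MAN CREATES COAT LIKE BEAR"
CHARS = string.ascii_uppercase + " "

def score(candidate):
    # The metric: count of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Selective mutation: randomly tweak a few characters.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(CHARS) for _ in TARGET)
generations = 0
while best != TARGET:
    generations += 1
    child = mutate(best)
    if score(child) >= score(best):  # "perfect selection": never keep a regression
        best = child

print(f"reached the target in {generations} generations")
[/code]

Even this crude loop typically lands on the target within thousands of generations, whereas blind random generation would need around 27^26 (about 10^37) attempts on average: that's the compression of evolutionary time the metric buys you.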
The following user(s) said Thank You: stoic

26 May 2015 15:46 #202972 by Shellhead
Even if AI is possible, our current understanding of psychology is somewhat limited. How would we avoid creating AI that is vulnerable to mental illness?
The following user(s) said Thank You: stoic
