Scrolling Game Development Kit Forum
General => Off-Topic => Topic started by: SmartBoy16 on 2009-10-18, 08:57:37 PM
-
another bot
What did he/she do this time?
It went off track about graphics cards. It's kind of obvious that it's a bot. There are probably a few still floating around.
@bluemonk: I think a majority of the bots had odd or invalid ICQ numbers. You may want to check some of them out.
-
There was only one other account without any posts that had an invalid ICQ number. (So I deleted it)
-
Oh. When will they ever learn? :laugh:
-
Oh. When will they ever learn? :laugh:
I should hope that bots never gain sentience!
-
Oh. When will they ever learn? :laugh:
I should hope that bots never gain sentience!
Oh, I get it! That would be bad. Though not as bad as a bad game or movie idea.
-
Computers will supposedly grow smarter than humans within this lifetime (maybe 30 years). Then they could start designing themselves and really take off. I just hope the good ones have an edge over the bad ones. It didn't occur to me until just now that there might also be bad ones.
-
@bluemonkmn: By "good" and "bad" ones, do you mean "good" and "evil" or "useful" and "useless" computers? I mean, good and evil are purely human concepts and very difficult to define. I bet most humans couldn't define them, let alone agree on a precise definition. It's one of those things for which you get a gut feeling that tells you whether something is good or evil. Besides, good and evil change with each culture and each time period. How could a computer comprehend that? :surprise:
If you mean useful and useless, then watching useful computers do things and useless ones undo those same things might lead to some laughs! :) (and some headaches besides!)
-
I meant good and evil, and by evil I meant against the general will of humanity. In other words, spam/bots are evil and if our super smart computers are dedicated to that task ... argh!
-
I had a graph that showed the growth of supercomputer complexity with a line demarcating what it would take to simulate a mind neuron by neuron in real time. We are surprisingly close to that line.
-
Close to that line? We have a couple of orders of magnitude to go yet, I think, but we are closer chronologically than a lot of people might suspect. And if you use a logarithmic graph (which makes sense, since technology seems to grow exponentially), we might look close that way too.
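The arithmetic behind "a couple orders of magnitude, but chronologically close" is easy to sketch. Assuming the roughly-18-month doubling pace discussed later in this thread (an assumption, not a law of nature), closing a 100x gap takes surprisingly few years:

```python
import math

# Gap to close: ~2 orders of magnitude (100x) in computing power.
gap = 100

# Each doubling multiplies power by 2, so the number of
# doublings needed is log2 of the gap.
doublings = math.log2(gap)  # ~6.64 doublings

# At a hypothetical pace of one doubling every ~1.5 years:
years = doublings * 1.5  # ~10 years
```

So two orders of magnitude on a linear scale is only about a decade on a logarithmic one, which is why the line looks close on a log plot.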
-
I can't remember Moore's law exactly, but it was something like the number of transistors on computer hardware will double every one and a half years... and the trend has been going that way for decades now.
-
Well, Moore's law isn't specifically about the number of transistors on a chip; it's more generally that computing speed/power will double every 18 months. Lately, some chip manufacturers have been saying that they've made chips as small and densely packed as they can for now, so for Moore's law to continue to apply, computing power has to increase some other way, so they're putting more cores in one CPU. The problem with that is that most programs and programmers don't take advantage of multi-core processors (yet).
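The "more cores doesn't automatically mean more speed" point has a classic formalization: Amdahl's law. A rough sketch (the 80% figure below is just an illustrative assumption):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Theoretical speedup when only part of a program can run in parallel.

    The serial part always runs at the same speed; only the parallel
    fraction benefits from extra cores.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A program that is 80% parallelizable:
two_cores = amdahl_speedup(0.8, 2)        # ~1.67x, not 2x
four_cores = amdahl_speedup(0.8, 4)       # ~2.5x, not 4x
many_cores = amdahl_speedup(0.8, 10**9)   # caps near 5x no matter what
```

That cap (1 / serial fraction) is why simply shipping more cores doesn't keep the doubling trend alive unless the software is rewritten to parallelize.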
-
Strictly speaking, Moore's Law does refer to the number of transistors: http://en.wikipedia.org/wiki/Moore's_law
And you know geeks -- we tend to speak very strictly ;)
-
There is an attempt being made now to simulate the cortex of a brain in silicon.
http://bluebrain.epfl.ch/
While we are several years out from running a complete brain in silicon, we are closer now than you would think. I recall the excitement of breaking a teraflop; now I can do the same with my video cards using the CUDA toolkit. Somehow it ended up being the future without me noticing it.
-
Strictly speaking, Moore's Law does refer to the number of transistors: http://en.wikipedia.org/wiki/Moore's_law
And you know geeks -- we tend to speak very strictly ;)
I didn't know that. I just know that lately I've been hearing (reading) that for computing power to continue to double every 18 months, they've had to move away from making smaller, more compact chips.
-
It appears that 40nm is the limit on process size; any lower and quantum effects such as tunneling become a problem. The trouble is that granularity is a problem when coding for multiple cores. Some problems do not parallelize well.
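A tiny sketch of why some problems split across cores and some don't (purely illustrative; no actual threading here, just the data dependencies that determine whether threading would help):

```python
# A reduction like a sum has no dependency between chunks, so it
# splits cleanly: give each "core" a quarter of the data, then
# combine the partial results.
data = list(range(1, 101))
chunk_sums = [sum(data[i:i + 25]) for i in range(0, 100, 25)]
total = sum(chunk_sums)  # same answer as sum(data)

# A recurrence like x[i] = f(x[i-1]) does not split: each step
# needs the previous result, so the usable granularity is a
# single iteration and extra cores sit idle.
x = 1.0
for _ in range(10):
    x = 0.5 * x + 1.0  # each iteration depends on the one before
```

The first pattern is what people mean by "embarrassingly parallel"; the second is the kind of problem that "does not parallelize well" no matter how many cores you throw at it.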
-
It appears that 40nm is the limit on process size; any lower and quantum effects such as tunneling become a problem. The trouble is that granularity is a problem when coding for multiple cores. Some problems do not parallelize well.
I disagree; there are already 32nm graphics cards on the market, and Intel plans on releasing 32nm CPUs soon enough.
-
There will be another technique to replace the current technology. I heard of some tests and theories a few years ago; alas, I can't recall the name.
But I think it's for sure that this development in computer power/size/speed will go on.
-
We'll start using the space offered in parallel dimensions. You know, like mice. "Hitchhiker's Guide to the Galaxy" fans know that mice are the most advanced beings in the universe, but only a small portion of their being protrudes into this universe :). I think that's how it worked.
-
We'll start using the space offered in parallel dimensions. You know, like mice. "Hitchhiker's Guide to the Galaxy" fans know that mice are the most advanced beings in the universe, but only a small portion of their being protrudes into this universe :). I think that's how it worked.
NERD!
Oh... wait
-
I had heard that the 32nm process didn't scale well, but it appears they've found a fix...
Wait, isn't quantum processing based in part on the idea that all the processors exist in the multiverse simultaneously and the first one to get the correct answer returns the value to all the processors? This would allow for a MASSIVELY parallel brute force attempt.