whot do you guess??
Method
You may call me Reverend.

Post: #21
RE: whot do you guess??
I'm baffled.


17/02/2009 06:51 PM
trademark91
Unique?
Fractal Insanity

Post: #22
RE: whot do you guess??
2 months: 970
6 months: 780

17/02/2009 06:55 PM
Assassinator
...

Post: #23
RE: whot do you guess??
ZiNgA BuRgA Wrote:
Vegetano1 Wrote:prices of the 9800 were also high when the card was first presented.
That's why you generally don't buy top-of-the-range cards >_>

That's exactly what I was trying to say: don't buy the absolute top-of-the-range product, which always comes with a hefty price premium.
I have nothing against the card, I just have something against the price. I mean, sure, the card is awesome, but the price is definitely not awesome.

Wait 6 months, and you get a great price drop.

ZiNgA BuRgA Wrote:
Vegetano1 Wrote:check this and click "video processing" >> they sure make it look worthwhile to encode with the GPU. Nvidia claims 20x faster than with the CPU.
http://www.nvidia.com/content/graphicspl...index.html
I claim the CPU is 5x faster than that.  Do you believe me?
Either way, they give very little information on it.

It's just marketing tricks. You should know all too well about marketing tricks.

They don't specify anything: what CPU vs. what GPU, encoding what into what format, what settings for the encode, or any of the other information that would allow you to test and confirm their results.

So they could very well be comparing their powerful GTX 280, using their optimized encoder doing a speed encode, against a 600 MHz Pentium II running the slowest, most inefficient encoding program you can find, doing an encode in the highest-quality (slowest) mode. You don't know, because they don't tell you anything more than "it's 20x faster!"

I mean, hey, I can make my CPU encode over 100 times faster if I turn all my reds into blues...
and change my settings from insanely slow overkill settings to max-speed settings, drop from high bitrate to low bitrate, cut my fps to 1/4 of the original, and do a 1-pass encode instead of the previous 3-pass encode.
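
(Just to put rough numbers on how those changes multiply - the individual factors below are made up purely for illustration, not measured from any real encoder:)
Code:
#include <stdio.h>

int main(void) {
    /* Hypothetical speed-up factors for each setting change. */
    double preset = 8.0;   /* insanely slow overkill preset -> max-speed preset */
    double passes = 3.0;   /* 3-pass encode -> 1-pass encode */
    double fps    = 4.0;   /* full frame rate -> 1/4 frame rate */
    printf("combined: ~%.0fx faster\n", preset * passes * fps);   /* ~96x */
    return 0;
}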


feinicks Wrote:As I quoted above, I am more interested in the future of GPUs. CPUs are reaching a limit on how small and powerful they can get without compromising on hardware real estate. There is a finite limit to how small, and how many, cores you can fit onto a board without increasing its size (and hence cost) and, more importantly, heat generation. Ever since the advent of discrete graphics cards, GPUs have been spared this dilemma. This is quite evident in the ever-increasing sizes of the cards, to allow more efficient channelling of information and dissipation of heat.

Spared this dilemma? No chance. It's not like GPUs don't produce heat or anything, so the heat problem will still exist. Simple as that. (And notice that it's the board and fans getting bigger, not really the actual chip, so that "card getting bigger" argument means nothing.)

And like Zinga said, some stuff you just can't make parallel.
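
(For anyone curious, this is basically Amdahl's law. A quick sketch with an assumed 25% serial portion - just an example figure, not a measurement - shows how hard the ceiling is:)
Code:
#include <stdio.h>

/* Amdahl's law: speedup = 1 / (serial + (1 - serial) / workers) */
double amdahl(double serial_fraction, int workers) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers);
}

int main(void) {
    /* With a 25% serial part, even 240 workers can't reach 4x. */
    printf("4 workers:   %.2fx\n", amdahl(0.25, 4));     /* ~2.29x */
    printf("240 workers: %.2fx\n", amdahl(0.25, 240));   /* ~3.95x */
    return 0;
}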
(This post was last modified: 17/02/2009 07:26 PM by Assassinator.)
17/02/2009 07:18 PM
roberth
Resident Full Stop Abuser.....

Post: #24
RE: whot do you guess??
MehHakker Wrote:I'm baffled.

Indeed.... when Zinga and Assassinator get into discussions, I think we generally all are.

17/02/2009 07:53 PM
Mickey
Down with MJ yo

Post: #25
RE: whot do you guess??
.... it's not talk that only Zinga and Assassinator understand - even I understand it

17/02/2009 08:08 PM
feinicks
One day... we Fly...

Post: #26
RE: whot do you guess??
ZiNgA BuRgA Wrote:
feinicks Wrote:Not denying that current-gen encoders, or for that matter any parallel-process application, are too CPU-dependent. However, my argument is based on the fact that, in the future, application design may change from CPU to GPU to harness the freer and more efficient power of the latter. CUDA is an example of a GPU-based instruction set. And already, the importance of CUDA has been recognised. Give it time, and you may have GPUs competing with CPUs.
This is difficult. A good metaphor for parallel processing is getting multiple people to do a task. Some tasks, 240 (less trained) people can do a lot better than 4 (well trained) people. However, some tasks simply don't make sense to have 240 people do. For example, playing a game on a computer - you can't really have 240 people doing the same thing at once. Whereas 4 people, who are much better at doing it, will easily beat 240 people who aren't so good.

Encoding is something that is largely serial, but can be made somewhat parallel. It's serial because a lot of aspects depend on the events occurring beforehand, i.e. you must do action A before action B because action B requires information obtained from action A. You can't "parallelize" this because it requires this sort of ordering.
You might take the example of adding two 100-digit numbers. For one person, this would take a while. But if we got 10 people, you could split up and give 10 digits to each person, then do a "merge" at the end. It's not perfect, as a carry can potentially make things a lot slower, but in most cases it's somewhat like "parallelizing" a serial task.

See also: http://en.wikipedia.org/wiki/GPGPU#Misconceptions
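
(A small sketch of the digit-splitting idea quoted above - the block size, worker count and example numbers are all made up, and the "workers" are just simulated with an ordinary loop:)
Code:
#include <stdio.h>

#define DIGITS  20
#define WORKERS 4
#define CHUNK   (DIGITS / WORKERS)

int main(void) {
    /* Two 20-digit numbers, stored least-significant digit first. */
    int a[DIGITS], b[DIGITS], sum[DIGITS + 1] = {0};
    for (int i = 0; i < DIGITS; i++) { a[i] = 9; b[i] = 3; }

    /* "Parallel" phase: each worker adds its own block of digits
       independently of the others. */
    int carry_out[WORKERS];
    for (int w = 0; w < WORKERS; w++) {
        int c = 0;
        for (int i = w * CHUNK; i < (w + 1) * CHUNK; i++) {
            int d = a[i] + b[i] + c;
            sum[i] = d % 10;
            c = d / 10;
        }
        carry_out[w] = c;                 /* carry leaving this block */
    }

    /* Merge phase: push each block's carry into the next block.
       This part is inherently serial - a carry can keep rippling. */
    int carry = 0;
    for (int w = 0; w < WORKERS; w++) {
        for (int i = w * CHUNK; carry && i < (w + 1) * CHUNK; i++) {
            int d = sum[i] + carry;
            sum[i] = d % 10;
            carry = d / 10;
        }
        carry += carry_out[w];
    }
    sum[DIGITS] = carry;

    for (int i = DIGITS; i >= 0; i--) printf("%d", sum[i]);
    printf("\n");   /* prints 133333333333333333332 */
    return 0;
}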

You are missing the point. I am not saying that at this moment, GPUs, as they are, can replace the processing prowess of CPUs. Processing power is inevitably directly related to the number of transistors available. GPUs now exceed the complexity of modern CPUs in terms of absolute transistor count. And like CPUs, they're becoming programmable - it's possible to harness all that graphics power to do something other than graphics. So while they may not be fully optimised for serial processing yet, they have already started to move in that direction. Further, customised hardware in a GPU allows stream processing. If anyone doesn't know, stream processors are extremely powerful floating-point processors able to process whole blocks of data at once, whereas CPUs carry out only a handful of numerical operations at a time. We've seen CPUs implement some stream processing with instruction sets like SSE and 3DNow!, but these efforts pale in comparison to what custom hardware has been able to do.

CUDA is one step in that direction. My point is that, in the future, this will be something very interesting to observe.
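
(For the curious, this is roughly what that "one thread per data element" style looks like in CUDA - a generic SAXPY sketch, nothing to do with NVIDIA's encoder; the array size and launch configuration are arbitrary:)
Code:
#include <cstdio>
#include <cuda_runtime.h>

/* Each GPU thread handles one element - the "whole block of data at once"
   style, vs. a CPU loop touching a few values per instruction. */
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                    /* ~1 million floats */
    size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);  /* thousands of threads */
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f\n", hy[0]);           /* 3*1 + 2 = 5.0 */
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}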

Assassinator Wrote:
ZiNgA BuRgA Wrote:
Vegetano1 Wrote:prices of the 9800 were also high when the card was first presented.
That's why you generally don't buy top-of-the-range cards >_>

That's exactly what I was trying to say: don't buy the absolute top-of-the-range product, which always comes with a hefty price premium.
I have nothing against the card, I just have something against the price. I mean, sure, the card is awesome, but the price is definitely not awesome.

Wait 6 months, and you get a great price drop.

That is the most basic of purchasing logic, unless you (you being generic) belong to those few ultra-rich enthusiasts who need to have the latest and the greatest.

Assassinator Wrote:
ZiNgA BuRgA Wrote:
Vegetano1 Wrote:check this and click "video processing" >> they sure make it look worthwhile to encode with the GPU. Nvidia claims 20x faster than with the CPU.
http://www.nvidia.com/content/graphicspl...index.html
I claim the CPU is 5x faster than that.  Do you believe me?
Either way, they give very little information on it.

It's just marketing tricks. You should know all too well about marketing tricks.

lol


Assassinator Wrote:So they could very well be comparing their powerful GTX 280, using their optimized encoder doing a speed encode, against a 600 MHz Pentium II running the slowest, most inefficient encoding program you can find, doing an encode in the highest-quality (slowest) mode. You don't know, because they don't tell you anything more than "it's 20x faster!"

I mean, hey, I can make my CPU encode over 100 times faster if I turn all my reds into blues...
and change my settings from insanely slow overkill settings to max-speed settings, drop from high bitrate to low bitrate, cut my fps to 1/4 of the original, and do a 1-pass encode instead of the previous 3-pass encode.

But then how is that the default comparison? GPUs do have more processing power; there just isn't much on the software side to harness it yet.

Assassinator Wrote:
feinicks Wrote:As I quoted above, I am more interested in the future of GPUs. CPUs are reaching a limit on how small and powerful they can get without compromising on hardware real estate. There is a finite limit to how small, and how many, cores you can fit onto a board without increasing its size (and hence cost) and, more importantly, heat generation. Ever since the advent of discrete graphics cards, GPUs have been spared this dilemma. This is quite evident in the ever-increasing sizes of the cards, to allow more efficient channelling of information and dissipation of heat.

Spared this dilemma? No chance. It's not like GPUs don't produce heat or anything, so the heat problem will still exist. Simple as that. (And notice that it's the board and fans getting bigger, not really the actual chip, so that "card getting bigger" argument means nothing.)

And like Zinga said, some stuff you just can't make parallel.

You misunderstand. I never said that GPUs don't produce heat. The dilemma refers to the size of the CPU. Also, though the size of the GPU chip remains the same, its transistor count has by far overshot that of the CPU; this requires more cooling, hence the bigger board.
Yes, probably not now, but work is already underway in that area. There's also the fact that GPUs can act as stream processors...

17/02/2009 10:01 PM
ZiNgA BuRgA
Smart Alternative

Post: #27
RE: whot do you guess??
feinicks Wrote:You are missing the point. I am not saying that at this moment, GPUs, as they are, can replace the processing prowess of CPUs. Processing power is inevitably directly related to the number of transistors available. GPUs now exceed the complexity of modern CPUs in terms of absolute transistor count. And like CPUs, they're becoming programmable - it's possible to harness all that graphics power to do something other than graphics. So while they may not be fully optimised for serial processing yet, they have already started to move in that direction. Further, customised hardware in a GPU allows stream processing. If anyone doesn't know, stream processors are extremely powerful floating-point processors able to process whole blocks of data at once, whereas CPUs carry out only a handful of numerical operations at a time. We've seen CPUs implement some stream processing with instruction sets like SSE and 3DNow!, but these efforts pale in comparison to what custom hardware has been able to do.

CUDA is one step in that direction. My point is that, in the future, this will be something very interesting to observe.
I seriously doubt GPUs will ever beat CPUs in terms of serial processing, the way things are currently going. Of course, with Intel and AMD planning to integrate graphics processing onto CPUs, things may change direction, but on the current path, a CPU will always run a serial task much faster than a GPU can. Don't confuse raw processing power with the ability to harness it - from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
I'm not digging CUDA much. For one, ATI doesn't support it, and unless nVidia squeezes them out of the market, it's going to be difficult to adopt such a solution.
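
(A toy example of the kind of loop that resists parallelism - completely made up, but each iteration needs the previous one's result, so extra threads simply can't help:)
Code:
#include <stdio.h>

int main(void) {
    double x = 1.0;
    /* Loop-carried dependency: step i needs step i-1's value. */
    for (int i = 0; i < 1000000; i++)
        x = 0.5 * x + 1.0;
    printf("%f\n", x);   /* converges towards 2.0 */
    return 0;
}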
17/02/2009 10:17 PM
feinicks
One day... we Fly...

Post: #28
RE: whot do you guess??
ZiNgA BuRgA Wrote:
feinicks Wrote:You are missing the point. I am not saying that at this moment, GPUs, as they are, can replace the processing prowess of CPUs. Processing power is inevitably directly related to the number of transistors available. GPUs now exceed the complexity of modern CPUs in terms of absolute transistor count. And like CPUs, they're becoming programmable - it's possible to harness all that graphics power to do something other than graphics. So while they may not be fully optimised for serial processing yet, they have already started to move in that direction. Further, customised hardware in a GPU allows stream processing. If anyone doesn't know, stream processors are extremely powerful floating-point processors able to process whole blocks of data at once, whereas CPUs carry out only a handful of numerical operations at a time. We've seen CPUs implement some stream processing with instruction sets like SSE and 3DNow!, but these efforts pale in comparison to what custom hardware has been able to do.

CUDA is one step in that direction. My point is that, in the future, this will be something very interesting to observe.
I seriously doubt GPUs will ever beat CPUs in terms of serial processing, the way things are currently going. Of course, with Intel and AMD planning to integrate graphics processing onto CPUs, things may change direction, but on the current path, a CPU will always run a serial task much faster than a GPU can. Don't confuse raw processing power with the ability to harness it - from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
I'm not digging CUDA much. For one, ATI doesn't support it, and unless nVidia squeezes them out of the market, it's going to be difficult to adopt such a solution.

Probably not anytime soon, especially seeing that even Intel is now trying to make a self-sufficient CPU (though IMO, that is trying to fight an unwinnable war). But I think that is where processor technology is headed. We may even see a "combined processing unit" or something of the sort soon enough. This is inevitable, as the next major research area is real 3D display - something that is beyond the current capacity of the CPU or the GPU (the CPU lacks stream and intensive parallel processing, while the GPU lacks basic serial processing).

17/02/2009 10:57 PM
Vegetano1
$urf

Post: #29
RE: whot do you guess??
ZiNgA BuRgA Wrote:
feinicks Wrote:You are missing the point. I am not saying that at this moment, GPUs, as they are, can replace the processing prowess of CPUs. Processing power is inevitably directly related to the number of transistors available. GPUs now exceed the complexity of modern CPUs in terms of absolute transistor count. And like CPUs, they're becoming programmable - it's possible to harness all that graphics power to do something other than graphics. So while they may not be fully optimised for serial processing yet, they have already started to move in that direction. Further, customised hardware in a GPU allows stream processing. If anyone doesn't know, stream processors are extremely powerful floating-point processors able to process whole blocks of data at once, whereas CPUs carry out only a handful of numerical operations at a time. We've seen CPUs implement some stream processing with instruction sets like SSE and 3DNow!, but these efforts pale in comparison to what custom hardware has been able to do.

CUDA is one step in that direction. My point is that, in the future, this will be something very interesting to observe.
I seriously doubt GPUs will ever beat CPUs in terms of serial processing, the way things are currently going. Of course, with Intel and AMD planning to integrate graphics processing onto CPUs, things may change direction, but on the current path, a CPU will always run a serial task much faster than a GPU can. Don't confuse raw processing power with the ability to harness it - from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
I'm not digging CUDA much. For one, ATI doesn't support it, and unless nVidia squeezes them out of the market, it's going to be difficult to adopt such a solution.

- So Nvidia's claim that video processing is 20x faster with the GPU than with the CPU is not true...
- Programs using the graphics card's abilities, like with CUDA, aren't worth it?! (e.g. in CS4)
http://www.nvidia.com/content/graphicspl...index.html

- Future CPUs will take over the GPU's jobs?! No more graphics cards?!

Anyway, what I do understand: if there is a possibility that CUDA is worth it and encoding with the GPU is faster, then buying a GTX 295 (or any other faster card) is worth it, because you might not see the difference in game fps (the difference between 60 fps and 120 fps - although I doubt the highest settings in Crysis with a GTX 295 will give you 120 fps), but you will notice the difference working with CUDA and in GPU encoding times.

ZiNgA BuRgA Wrote:from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
Ehhh, so basically you're saying it IS a "we don't have the applications today" issue?! Because it's difficult but not impossible, these programs will appear. ;)

ATI seems to be one step behind nVidia... or maybe it's nVidia-exclusive. Why should ATI have to support CUDA if nVidia invented CUDA?


17/02/2009 11:16 PM
feinicks
One day... we Fly...

Post: #30
RE: whot do you guess??
Vegetano1 Wrote:
ZiNgA BuRgA Wrote:
feinicks Wrote:You are missing the point. I am not saying that at this moment, GPUs, as they are, can replace the processing prowess of CPUs. Processing power is inevitably directly related to the number of transistors available. GPUs now exceed the complexity of modern CPUs in terms of absolute transistor count. And like CPUs, they're becoming programmable - it's possible to harness all that graphics power to do something other than graphics. So while they may not be fully optimised for serial processing yet, they have already started to move in that direction. Further, customised hardware in a GPU allows stream processing. If anyone doesn't know, stream processors are extremely powerful floating-point processors able to process whole blocks of data at once, whereas CPUs carry out only a handful of numerical operations at a time. We've seen CPUs implement some stream processing with instruction sets like SSE and 3DNow!, but these efforts pale in comparison to what custom hardware has been able to do.

CUDA is one step in that direction. My point is that, in the future, this will be something very interesting to observe.
I seriously doubt GPUs will ever beat CPUs in terms of serial processing, the way things are currently going. Of course, with Intel and AMD planning to integrate graphics processing onto CPUs, things may change direction, but on the current path, a CPU will always run a serial task much faster than a GPU can. Don't confuse raw processing power with the ability to harness it - from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
I'm not digging CUDA much. For one, ATI doesn't support it, and unless nVidia squeezes them out of the market, it's going to be difficult to adopt such a solution.

- So Nvidia's claim that video processing is 20x faster with the GPU than with the CPU is not true...
- Programs using the graphics card's abilities, like with CUDA, aren't worth it?! (e.g. in CS4)
http://www.nvidia.com/content/graphicspl...index.html

- Future CPUs will take over the GPU's jobs?! No more graphics cards?!

Anyway, what I do understand: if there is a possibility that CUDA is worth it and encoding with the GPU is faster, then buying a GTX 295 (or any other faster card) is worth it, because you might not see the difference in game fps (the difference between 60 fps and 120 fps - although I doubt the highest settings in Crysis with a GTX 295 will give you 120 fps), but you will notice the difference working with CUDA and in GPU encoding times.

The CPU and GPU will never be entirely integrated (despite my interest in that, I do not believe it will happen). At most, the shift will change from CPU to GPU, with the GPU becoming the major application processor and the CPU the basic processor.

Vegetano1 Wrote:
ZiNgA BuRgA Wrote:from a programmer's perspective, mass parallelism is very difficult to achieve with many applications, so this isn't really just a "we don't have the applications today" issue.
Ehhh, so basically you're saying it IS a "we don't have the applications today" issue?! Because it's difficult but not impossible, these programs will appear. ;)

ATI seems to be one step behind nVidia... or maybe it's nVidia-exclusive. Why should ATI have to support CUDA if nVidia invented CUDA?

Exactly my point. Moreover, ATI has already started working on a technology (or rather, architecture) similar to CUDA.



This is a good watch...

Source: Google Video


Watch the 38-minute version:

http://www.charlierose.com/view/interview/10060

18/02/2009 12:12 AM