[DirectX 10]Is anyone following it?
Thomas Harte
Member #33
April 2000
avatar

Quote:

Also, DX10 introduces virtual memory on the GPU. This isn't so immediately interesting for games, other than it means alt+tab will finally work automatically, but it has big implications for multiple windowed applications cooperating to share the GPU (kind of important when the OS itself wants to use GPU effects for menus and so on).

DirectX is a really stupid place to implement VRAM virtualisation if Microsoft are serious about keeping up with everyone else's GUI advancements after their Vista catch-up. Is there a reason, other than the apparent incompetence of management over OS development (source)?

OS X already does VRAM virtualisation system-wide. I guess the upcoming GL-based X servers and window managers will end up doing much the same, thanks to GL's hands-off VRAM management.
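
For anyone who hasn't fought it, here is a rough sketch of the D3D9-era device-lost dance that VRAM virtualisation should make unnecessary. The ReleaseDefaultPoolResources / RecreateDefaultPoolResources helpers and g_presentParameters are app-side inventions for the example; the D3D9 calls themselves are real:

#include <windows.h>
#include <d3d9.h>

extern D3DPRESENT_PARAMETERS g_presentParameters;  // saved at CreateDevice time
void ReleaseDefaultPoolResources();                 // hypothetical app helpers that release
void RecreateDefaultPoolResources();                //   and rebuild every D3DPOOL_DEFAULT resource

// Without virtualised VRAM, an alt+tab can invalidate the device, and the app
// has to notice this every frame and rebuild its resources by hand.
void RenderFrame(IDirect3DDevice9 *device)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
        return;                                     // device gone (e.g. alt+tab); try again later
    if (hr == D3DERR_DEVICENOTRESET) {
        ReleaseDefaultPoolResources();
        device->Reset(&g_presentParameters);
        RecreateDefaultPoolResources();
        return;
    }

    device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    device->BeginScene();
    // ... draw calls ...
    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}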

Bob
Free Market Evangelist
September 2000
avatar

Quote:

The problem with every release of DirectX providing a new common base is that no one wants to go out and buy a new top-of-the-line, $500 video card every year with the new release of DirectX.

You don't need a $500 video card to get access to newer versions of DirectX. Low end GPUs will implement D3D10 (or 11, etc) at reduced speed. All the required features will be there, though.
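
To put that concretely, here is a hedged sketch of the D3D9-style caps sniffing that a guaranteed D3D10 feature set is meant to retire. The thresholds below are invented for illustration; the caps structure and the call are real D3D9:

#include <windows.h>
#include <d3d9.h>

// D3D9 style: interrogate the card and pick a code path. Under D3D10 the whole
// feature set is mandated, so checks like this should disappear; only
// performance still varies between cards.
bool SupportsOurEffects(IDirect3D9 *d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;

    if (caps.PixelShaderVersion < D3DPS_VERSION(2, 0))  // arbitrary bar for this sketch
        return false;
    if (caps.MaxSimultaneousTextures < 4)               // ditto
        return false;
    return true;
}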

--
- Bob
[ -- All my signature links are 404 -- ]

relpatseht
Member #5,034
September 2004
avatar

Even the lower end cards will probably cost around $150 when they first come out. Of course, they will go down to about $70 after a few months. Still, I don't think it is a good thing to demand hardware upgrades with every release.

Thomas Fjellstrom
Member #476
June 2000
avatar

Quote:

One day everyone will have DX10.

Highly doubtful. My dad doesn't even have DX8, AFAIK. His fiancée's machine has 9.0c... but it's not capable of much of anything, being a 450 with 128MB of RAM.

People have very little reason to upgrade again to a new OS version for $300+. Except that MS is forcing the issue yet again, by making APIs require specific versions of Windows updates. Isn't that a little unethical?

And I have even less reason to buy a copy of Vista. I haven't ever owned a real copy of any Windows version; the only legal access I had at one point was a lame NEC rescue CD that would only work on the machine it came with (unethical again, I think: the software was paid for, so I should have been allowed to use it on my next computer, but no, MS and OEMs like to rip off their customers).

Basically, I have never owned any MS software, and I'm not about to start. I've been treated like a dirty criminal by MS from day one. So I use alternative OSs instead, and I actually prefer developing on Unix-like OSs to any Windows version you care to throw at me. The software costs WAY too much, and without a $1000+ licence for MSVC you don't get any really useful features for developing real software on Windows.

At this point, even if I do manage to start developing some commercial software, I'll only port it to Windows after it's done, and only if it makes sense.

--
Thomas Fjellstrom - [website] - [email] - [Allegro Wiki] - [Allegro TODO]
"If you can't think of a better solution, don't try to make a better solution." -- weapon_S
"The less evidence we have for what we believe is certain, the more violently we defend beliefs against those who don't agree" -- https://twitter.com/neiltyson/status/592870205409353730

Goalie Ca
Member #2,579
July 2002
avatar

Everyone will have 10.0 when the new hotness is 12.9.

-------------
Bah weep granah weep nini bong!

Richard Phipps
Member #1,632
November 2001
avatar

Ironically, if writing shaders becomes ever more important we could see a return of demo-crew-style 'hacks'. By this I mean that rather than focusing on how many polygons can be rendered, the trick becomes writing clever shaders to create cool new effects. ;)

Jakub Wasilewski
Member #3,653
June 2003
avatar

Quote:

That's where DX10 helps. The high end is still for you to solve, but that's ok: this is where game engine programmers can differentiate and try to sell their product, so it's a good place to spend their time. The point is that DX10 (will someday) give a constant platform to remove all the irritating crap around the low end support.

OK, I think we can agree on that point. This will only become viable if the GPU manufacturers manage to build hardware that matches the specification exactly, and if the DX10 specification itself is strict and detailed enough, but there's a good chance of that.

---------------------------
[ ChristmasHack! | My games ] :::: One CSS to style them all, One Javascript to script them, / One HTML to bring them all and in the browser bind them / In the Land of Fantasy where Standards mean something.

Shawn Hargreaves
The Progenitor
April 2000
avatar

The thing I think a lot of you are missing is that the lowest common denominator thing isn't really about cutting edge 3D games.

For high end games, sure, hardware improves all the time, and new DX releases are needed to keep up with this. That's one universe.

But there is another universe. That is the universe my parents live in, where they bought a computer 3 years ago, will probably replace it in 2 or 3 more years, and don't have a clue what kind of GPU they have (or even what a GPU is).

Most people on allegro.cc live in the first universe, but the vast majority of people on the planet are part of the second. These people don't play Doom or Battlefield. They spend most of their time using a web browser, or writing email, or using Word or Excel, or playing games like Freecell or Poker or Bejeweled. There are a lot of these people out there, and they play a lot of games: way way way more hours go into Freecell than all the Doom and Battlefield players combined!

And yet today, the graphics industry has nothing to offer this mainstream market. Yearly rev cycles are a nightmare for games like Freecell! So today, Freecell doesn't use the GPU at all. It just sticks with a lowest common denominator that was established sometime back in the mid 80's. And the vast majority of computer programs in the world are the same.

So the problem isn't really one of yearly rev cycles. It's this: how do we take all these apps that haven't advanced their graphics technology since 1985 and convince them to start using 2005-era technology?

Personally I think that is a very interesting problem to try to solve (it poses a bunch of really hard issues that are very different to the sort of things high end game developers typically worry about), and one that can have a big impact on changing the way software looks and feels.

Mainstream software just doesn't move very fast.

Once it was text mode.
Then it became raster graphics.
(someday) Then DX10 made the GPU accessible to everyone.

These big shifts only happen once every decade or more - it is a whole different scale of thinking to the high end game market.

(note: the idea isn't that apps would actually code directly to DX10. DX is a low level API that just exposes the GPU caps: other higher level abstractions will be used to write the actual UI code. For instance Flash could easily be accelerated using DX10 hardware. And Microsoft is working on the WPF layer (formerly known as Avalon). I'm sure there will be many others interested in taking advantage of richer rendering capabilities...)

Quote:

Even the lower end cards will probably cost around $150 when they first come out. Of course, they will go down to about $70 after a few months.

One thing I've learned about MS is that they think long term. The goal isn't the $70 cards after a few months: it is the situation a few years from now when even the crappiest integrated motherboard video hardware will support DX10 shaders. That's really not as far-fetched as it may currently seem, but it is a much longer-lead plan than just affecting the product cycle for the next holiday season.

Quote:

Ironically, if writing shaders becomes ever more important we could see a return of demo-crew-style 'hacks'. By this I mean that rather than focusing on how many polygons can be rendered, the trick becomes writing clever shaders to create cool new effects. ;)

That is already happening to a major extent. It's been two or three years since I've seen anyone seriously worrying about how to maximize polygon throughput, and most games aren't using any more polygons now than they were a couple of years ago. The extra resources are going into longer shaders and more passes, drawing the same polygons in a better way.
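
To illustrate "same polygons, drawn in a better way" in the crudest possible terms, here's a toy multi-pass sketch. The Mesh/Shader types and the whole mini-engine are invented for the example, not any real API:

#include <cstdio>

// Toy "engine": real code would go through D3D or GL; these stubs just log.
struct Mesh   { const char *name; };
struct Shader { const char *name; };

static void BindShader(const Shader &s) { std::printf("bind shader: %s\n", s.name); }
static void DrawMesh(const Mesh &m)     { std::printf("  draw %s\n", m.name); }
static void DrawFullScreenQuad()        { std::printf("  draw full-screen quad\n"); }

// The polygon count never changes; each extra pass just spends more shader
// time on the same geometry.
static void RenderScene(const Mesh *meshes, int count,
                        const Shader &depthOnly,
                        const Shader &lighting,
                        const Shader &post)
{
    BindShader(depthOnly);                        // pass 1: cheap depth-only pass
    for (int i = 0; i < count; ++i) DrawMesh(meshes[i]);

    BindShader(lighting);                         // pass 2: same meshes, long per-pixel shader
    for (int i = 0; i < count; ++i) DrawMesh(meshes[i]);

    BindShader(post);                             // pass 3: full-screen shader work, no new geometry
    DrawFullScreenQuad();
}

int main()
{
    const Mesh scene[] = { {"level"}, {"player"} };
    RenderScene(scene, 2, Shader{"depth_only"}, Shader{"per_pixel_lighting"}, Shader{"bloom"});
    return 0;
}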

Quote:

OK, I think we can agree on that point. This will only become viable if the GPU manufacturers manage to build hardware that matches the specification exactly, and if the DX10 specification itself is strict and detailed enough, but there's a good chance of that.

Believe me, the spec is very verbose :-) Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

Bob
Free Market Evangelist
September 2000
avatar

Quote:

Believe me, the spec is very verbose :-) Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

Indeed. The main reason for that is that some people pushed Microsoft to actually write a spec, instead of having said people try to figure out what the refrast (reference rasterizer) was trying to achieve.

--
- Bob
[ -- All my signature links are 404 -- ]

Carrus85
Member #2,633
August 2002
avatar

Quote:

One day everyone will have DX10.

Heh, hardly. For example, I'm currently stuck with a GeForce 440 Go MX 64MB card, with NO POSSIBLE WAY OF UPGRADING. (Hooray for laptop video cards! If you can upgrade these, please let me know, although I'm pretty sure you can't without either invalidating your warranty or doing a huge amount of soldering/desoldering.) Basically, I'm stuck at a DirectX 7/8 level. No shader support whatsoever. Which is kinda sad, considering the machine has a 3400+ AMD 64-bit processor...

HoHo
Member #4,534
April 2004
avatar

Quote:

NO POSSIBLE WAY OF UPGRADING.

Will you use the same laptop in five years?

__________
In theory, there is no difference between theory and practice. But, in practice, there is - Jan L.A. van de Snepscheut
MMORPG's...Many Men Online Role Playing Girls - Radagar
"Is Java REALLY slower? Does STL really bloat your exes? Find out with your friendly host, HoHo, and his benchmarking machine!" - Jakub Wasilewski

Murat AYIK
Member #6,514
October 2005
avatar

People criticise Macs and not laptops! Is it too hard to make something changeable that looks like a BIOS or those old VGA RAMs? Anyway, the strategy behind DX10 is very nice; I hope it influences the hardware designers.

_____________________________________________________
"The world doesn't care about what storms you sailed through, it is interested in whether you brought the ship to the dock or not!"

Jakub Wasilewski
Member #3,653
June 2003
avatar

Quote:

Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

That's great :). Where there is no leeway in the specification, there is no room for two different interpretations by GPU manufacturers, and everyone is happy.

---------------------------
[ ChristmasHack! | My games ] :::: One CSS to style them all, One Javascript to script them, / One HTML to bring them all and in the browser bind them / In the Land of Fantasy where Standards mean something.

Bob
Free Market Evangelist
September 2000
avatar

Quote:

and everyone is happy.

... except for users, OEMs and GPU manufacturers, of course. Some leeway is good. It promotes innovation. It means that new image quality improvement features can be applied to old applications / games. Plus, no one can write iron-clad specs, not even Microsoft. And you don't want to design (or pay for) hardware that is bug-for-bug compatible with DX10 forever.

--
- Bob
[ -- All my signature links are 404 -- ]

Jakub Wasilewski
Member #3,653
June 2003
avatar

Yeah, some leeway is good, but not in everything. We don't want creativity of the "trying to figure out what the author of the specification meant" kind. The places where improvements should be allowed ought to be taken into account when writing the specification. We don't want "improvements" that just mean every single card implements something differently, so that something which looks good on 90% of cards looks like crap on the remaining 10% that decided to interpret the specification differently.

Also, "bug for bug" compatibility is not what I'd like to see. There is a specification. If a driver does not adhere to it, that's the driver manufacturer's fault and it should be corrected. If DX itself does something other than what the specification states, it should be corrected.

Of course, reality doesn't always present us with such a clear-cut situation, but I will still claim that a better-prepared, longer specification will allow for better compatibility amongst GPUs and won't halt innovation.

However, I realise I'm not the most experienced or informed person here on this topic (seeing that you work for nVidia, and Shawn works on DX), so my opinion might not be worth much... but I know how painful it is to struggle against incompatibilities between different vendors' implementations of the same thing (IE vs the rest of the world, various GPUs with OpenGL).

---------------------------
[ ChristmasHack! | My games ] :::: One CSS to style them all, One Javascript to script them, / One HTML to bring them all and in the browser bind them / In the Land of Fantasy where Standards mean something.

HoHo
Member #4,534
April 2004
avatar

Quote:

Some leeway is good. It promotes innovation. It means that new image quality improvement features can be applied to old applications / games.

That's what OpenGL and its extensions are good for :)
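
For completeness, the usual old-school way of doing that check: a minimal sketch, assuming a GL context is already current (the substring test is naive, but it's how most code did it):

#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// Classic extension sniffing: only take the fancy path if the driver advertises it.
static bool HasExtension(const char *name)
{
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    return ext != NULL && std::strstr(ext, name) != NULL;   // naive substring match
}

void ChooseRenderPath()
{
    if (HasExtension("GL_ARB_fragment_program"))
        std::printf("using fragment programs\n");            // optional fast/pretty path
    else
        std::printf("falling back to fixed function\n");     // lowest common denominator
}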

__________
In theory, there is no difference between theory and practice. But, in practice, there is - Jan L.A. van de Snepscheut
MMORPG's...Many Men Online Role Playing Girls - Radagar
"Is Java REALLY slower? Does STL really bloat your exes? Find out with your friendly host, HoHo, and his benchmarking machine!" - Jakub Wasilewski

Shawn Hargreaves
The Progenitor
April 2000
avatar

Just to clarify: I don't actually work on DX, I'm just pretty involved with the people who do.

I would say that leeway to extend a spec is good, but leeway to not implement parts of it, or to implement parts of it differently, is very bad.

Example: it's cool that Intel can add SSE2 to their instruction set. But once things have been added and standardized, they should be left alone. For instance, it would be a nightmare if they suddenly decided to drop the CMOV instruction! Or even worse, to change the way it reads the condition flags.

A healthy platform is one where you have a nice solid base of stuff that you know will be there and you know will always work the same way. Then if you want to use extra features, you can check for them, but 99% of your code doesn't care about those new features so it can be kept nice and clean and robust.
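
The same "solid base plus checked-for extras" pattern in CPU terms, as a small sketch. GCC/Clang's __builtin_cpu_supports does the runtime check here; on other compilers you'd query CPUID directly:

#include <cstdio>

// 99% of the code relies only on the guaranteed baseline; the extension is an
// optional fast path that is explicitly checked for before use.
void ProcessData()
{
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    if (__builtin_cpu_supports("sse2")) {
        std::printf("SSE2 fast path\n");          // optional extra
        return;
    }
#endif
    std::printf("portable baseline path\n");      // the base everything can count on
}

int main()
{
    ProcessData();
    return 0;
}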

Goalie Ca
Member #2,579
July 2002
avatar

I have to agree with Shawn. A good, well-thought-out standard makes everyone happy. POSIX is an excellent example of this. Same with the building code, electrical code, etc. Then of course there are accepted practices which all engineers are held accountable to.

One thing Microsoft has done right is legacy compatibility. They've run into so many problems, though, because of the monolithic design and the hacked interfaces. It's like they "use case" designed everything: in the programs there's literally a button for everything and a function for everything. Especially in the case of .NET, if it's not in WinForms, good luck trying to get it working. .NET is a classic example of Microsoft standards: they didn't think anything through, and now they're left supporting it for a gazillion years while they get ready to release a newer/bigger/better library in the meantime.

A cleaner, more modular design is far more future-proof. As an engineering student, I find it hard to believe that people can't sit down and come up with a good standard. The engineers most certainly know what's coming up. I would have to pin the problem on management, though, because investors and companies only ever look short term. Not many people think about the long term and sustainability.

edit: I also forgot to mention AMD and Intel. They've done an excellent job so far. It's also nice to see amd64 take some advantage of the completely new mode. One thing that concerns me about Apple is that they should have gone straight to 64-bit Intel chips so they wouldn't have to support 32/64-bit PPCs and 32/64-bit Intels. Luckily, though, NeXT was well thought out and came with universal binary capabilities, among many other things. NeXT was really an engineering marvel from an OS point of view; so much remains unchanged from all that time ago.

-------------
Bah weep granah weep nini bong!

Thomas Harte
Member #33
April 2000
avatar

Goalie Ca said:

One thing Microsoft has done right is legacy compatibility. They've run into so many problems, though, because of the monolithic design and the hacked interfaces.

Yes, I think they missed a golden opportunity with Windows 95. There may not have been time to come up with a clean new codebase, but since there was going to be an API break whatever they did, they should have taken the opportunity to make it a much more severe one.

In a way I prefer the Apple Classic to OS X transition. PowerPC OS X had Classic compatibility, so to an extent Classic apps can still be run on PowerPC OS X, even if they don't interact with the new OS in all the expected ways and the process isn't exactly speedy. OS X meanwhile contains some completely new APIs, some "this is what we learnt with NextStep" APIs and a trimmed, consolidated version of the old Classic APIs. When everyone knew OS X was coming, developers could choose to stick to the consolidated libraries and release binaries that worked natively in both Classic and OS X. After OS X came out, Classic remained the default-boot OS supplied on all new hardware for a couple of years, until OS X native versions of all the major apps were available and most of the major OS X bugs and issues had been remedied.

That said, as Apple control both the hardware and the software and really only have to worry about keeping Adobe (for Photoshop) and Microsoft (for Office) abreast of new OS developments, they can very easily adopt a tiered transition like this, whereas Microsoft almost certainly couldn't. They need big-bang launches of new OSs to properly control how hardware and software are bundled.

Shawn Hargreaves said:

So the problem isn't really one of yearly rev cycles. It's this: how do we take all these apps that haven't advanced their graphics technology since 1985 and convince them to start using 2005-era technology?

Maybe persuade them to port to Mac OS?

Shawn Hargreaves said:

One thing I've learned about MS is that they think long term.

Indeed - that's why their rendering technology is still based on 1985 paradigms.

HoHo said:

Will you use the same laptop in five years?

I'm now using one that is 53 months old and says (c) 2001 on the base. I'm not "still" using it though because I've only had it a year and a bit, replacing my slightly older desktop.

Murat AYIK said:

People criticise Macs and not laptops! Is it too hard to make something changeable that looks like a BIOS or those old VGA RAMs?

Intel promote the Extensible Firmware Interface, which is a substantial step up from the olde BIOS but doesn't do anything like a full 3D API. It does mean you can dump VGA compatibility (and 8086 real mode), though. Microsoft were going to support EFI with Vista but have suddenly decided they aren't going to.

Prior to that, Sun had invented Open Firmware - also used by IBM and Apple/PowerPC - which was processor-neutral and accepted mini firmware drivers as Forth-based p-code. That also worked in a substantially smarter way than the BIOS, but I don't think it was ever implemented by anyone alongside x86 processors.

HoHo
Member #4,534
April 2004
avatar

Quote:

They've done an excellent job so far.

I disagree with that a bit. Using basically the same instruction set as 25 years ago is not very efficient. Sure, it gives really nice backwards compatibility, but efficiency suffers because of it. SIMD stuff is fun and all, but I think current CPUs have too little power in those units.

Of course, with multicore architectures it gets a little simpler. E.g. they don't have to spend so many transistors on branch predictors; just cram another core into the package and the net win will probably be greater.

Also, it's too bad AMD only doubled the register count when it created the 64-bit architecture. I'm not sure how expensive it would be to quadruple it, but it surely would have made the compiler's job way easier if they had done it :)

One possibly interesting future development might be AMD's new plans for putting CPUs and other chips in directly connected sockets. I wouldn't mind having the power of, say, 32 4x32 FP SIMD units as an add-on for my PC. Perhaps the good old days of separate FPUs are coming back :)
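
Since 4x32 FP SIMD units came up, a minimal SSE intrinsics sketch of what one of those units does per instruction (x86-specific, needs an SSE-capable compiler):

#include <xmmintrin.h>   // SSE: 128-bit registers, i.e. 4 packed floats
#include <cstdio>

int main()
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float r[4];

    __m128 va = _mm_loadu_ps(a);      // load four floats into one SIMD register
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);   // one instruction, four additions
    _mm_storeu_ps(r, vr);

    std::printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}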

__________
In theory, there is no difference between theory and practice. But, in practice, there is - Jan L.A. van de Snepscheut
MMORPG's...Many Men Online Role Playing Girls - Radagar
"Is Java REALLY slower? Does STL really bloat your exes? Find out with your friendly host, HoHo, and his benchmarking machine!" - Jakub Wasilewski

Thomas Harte
Member #33
April 2000
avatar

Quote:

I disagree with that a bit. Using basically the same instruction set as 25 years ago is not very efficient. Sure, it gives really nice backwards compatibility, but efficiency suffers because of it. SIMD stuff is fun and all, but I think current CPUs have too little power in those units.

But Intel seem to have been able to lure Apple away from the PowerPC realm, which was one of plentiful registers (32 integer, 32 floating point, 32 vector) and a modern (c. 1993) RISC instruction set designed from day one with 64-bit operation in mind, so they must have something going for them. Independent tests put the first Intel iMac around 15% faster than the G5 it replaces (for universal binaries; some tasks such as MP3 encoding are marginally slower, but not by much), and the MacBook Pro is probably at least twice as fast as the old G4 PowerBook.

I think the main thing of note is that some claim the Intel machines can't decode full-resolution HDTV video without frame skipping, whereas the old G5 iMac definitely could. Others say there isn't a noticeable difference.

HoHo
Member #4,534
April 2004
avatar

Quote:

But Intel seem to have been able to lure Apple away from the PowerPC realm, which was one of plentiful registers (32 integer, 32 floating point, 32 vector) and a modern (c. 1993) RISC instruction set designed from day one with 64-bit operation in mind, so they must have something going for them.

Wasn't the problem that IBM couldn't achieve what it had promised? Something like a dual-core 2.5GHz G5 in a laptop without liquid cooling?
;)

__________
In theory, there is no difference between theory and practice. But, in practice, there is - Jan L.A. van de Snepscheut
MMORPG's...Many Men Online Role Playing Girls - Radagar
"Is Java REALLY slower? Does STL really bloat your exes? Find out with your friendly host, HoHo, and his benchmarking machine!" - Jakub Wasilewski

Shawn Hargreaves
The Progenitor
April 2000
avatar

Yep. Which I think says a lot about the "but Intel hardware is an inefficient design" argument.

Sure, the x86 instruction set isn't the greatest ever. But in the real world where engineering realities count for a lot more than theoretical aesthetics, chip designers sure do seem to be having a lot of success manufacturing them!

Thomas Harte
Member #33
April 2000
avatar

Quote:

Wasn't the problem that IBM couldn't achieve what it had promised? Something like a dual-core 2.5GHz G5 in a laptop without liquid cooling?

For Apple, they seemed unable to produce a 3GHz G5 part for the desktop, or any G5 with thermal characteristics suitable for a laptop. Similarly, Motorola/Freescale (who were supplying the G4s) seemed to have some sort of mental block about FSBs above 167MHz.

The Intel chip does reputedly cost about three times as much as the G5 it replaces, but Apple have been clear that it's all about the roadmap and about performance per watt. If the G5 isn't going anywhere except into increasingly specialised console designs (PowerPC relatives will run all three of the next-generation consoles), then I guess the switch is very sensible.

Reading between the lines, I think there may have been a sour relationship between Apple and IBM. Back when they (plus Motorola) designed the PowerPC and the reference platform, IBM intended to put out a Mac compatible, but in the end Apple wouldn't let them license the Mac OS on the terms they wanted: IBM wanted to offer it as an optional install, while Apple would only supply it as a default install. <Insert your own comment on Apple's stupidity here>. Then AltiVec seems to have been added to the G4 as a collusion between Apple and Motorola, leaving IBM to accept Apple customisations to their G5 design (which doesn't inherently have AltiVec) if they wanted Apple's business.

On the other hand, I think Apple have realised what a lot of nerdy tech people have long wanted not to admit - nobody cares about machine internals. They care about the web, their email and Office. A few also care about syncing their personal music player.

I also think Intel have a way of constantly surprising the industry. Before the Pentium it was assumed that the old CISC/Intel-style architecture was dead, hence projects like FX!32 (allowing you to run your old-fashioned Intel binaries under the Alpha version of Windows NT) and Apple's switch to PowerPC. With the Core Solo/Duo, Intel seem to have caught up with AMD again in terms of work per cycle, and it seems likely Intel will go quad-core first. No doubt it'll soon turn into a razor-blade scenario. Gillette: "Fuck everything, we're doing five blades!"

More registers would be nice, but when cache or hyper-threading can achieve much the same thing in terms of silicon utilisation, and we're all leaving the compiler to sort it out anyway, what difference does it make? You just end up with whoever can spend the most on R&D shipping the best processors, and at the minute that is the x86 realm.

As an aside, rereading my earlier comments, it sounds a bit like I'm a rabid anti-Microsoft loony. In fact I use Office frequently and don't even have OpenOffice (YUCK - X11) or NeoOffice/J installed. I rarely, if ever, mention to my real-life friends that I use an Apple unless it directly comes up for some reason. I've only switched to Apple recently, and that's because right now they seem to be firing on all cylinders. I wouldn't have bought a Classic OS machine over a Windows 95/98/2000 box if you'd paid me, and if Microsoft leapfrog Apple then I'll happily switch back. Sure, I'd like Apple's market share to grow, but that's just because I'd like them to be more secure and to keep up the competition that benefits all of us. Realistically, the iPod vertical-integration model isn't going to be a cash cow for very much longer, and I'd hate to see them go.
