[DirectX 10] Is anyone following it?
Archon

Is anyone interested in DirectX 10? I came across the beta for it while looking for the DirectX 9 SDK.

It's (or will be) Vista-only, but I've read that it's been redesigned (as opposed to simply being an 'improved DirectX').

Secondly, I've read in the DirectX SDK that you cannot use DirectX in conjunction with a non-Windows platform - is that statement void (due to anti-trust laws or something)?

Thomas Fjellstrom

I'll bet Shawn has known about it for a while ;) I'll just bet that DX10 is just XNA's new framework.

Archon
Quote:

I'll just bet that DX10 is just XNA's new framework.

Possible - I don't know too much about XNA (CP just mentioned it to me).

Are DirectX 9 and below going to be scrapped in future Windows versions?

Goalie Ca

The whole reason to use a library is to make things simpler and easier. OpenGL does that to some extent, SDL does it, Allegro does it, and VTK does it. DirectX might as well be obfuscated assembly. Not the easiest thing to learn or use quickly.

There are a few critical things it offers, though, but honestly I haven't a clue whether there are any useful changes, even in 3D land. Maybe someone could enlighten me as to why I'd want to go out and learn it - especially version 10 versus 9 and 8.

If I were to draw the analogy, I would say that as many people care about 10.0 as care about Vista, and probably for the same reasons.

Archon

The reason I'm interested in DirectX 9 right now is for a part of a project for university. I want to make a wrapper (like I did for DirectX 7 and 8 in VB6).

And DirectX 7 and 8 helped me understand the concepts of Allegro and OpenGL respectively.

Bob
Quote:

Are DirectX 9 and below going to be scrapped in future Windows versions?

No, they'll be converted to DX10 in real time by the runtime provided by Microsoft.

Thomas Fjellstrom

Yay, more layers, more indirection, and more emulation! Just what the Wintel empire needs to make more people upgrade more times!

Bob

Oddly enough, DX9 and under are likely to run faster layered on top of DX10 than they do natively.
See Wloka's law for why DX9 (and under) is much slower than you think.

DX10 fixes a large portion of this (not quite reaching OpenGL yet, though), enough to make most DX9 applications run faster when emulated on top of DX10.

Archon
Quote:

not quite reaching OpenGL yet, though

I'd thought that Direct3D would be more feature rich than OpenGL...

Bob
Quote:

I'd thought that Direct3D would be more feature rich than OpenGL...

I was talking about performance, not features. See the posts above.

Shawn Hargreaves

DX10 is going to coexist with DX9, so games can pick either API.

Why?

Because DX10 has a really aggressive set of requirements. Not only does it require Vista, it also needs a DX10-capable GPU with shader model 4.0, virtual memory management, geometry shaders, etc. DX10 doesn't have any GPU caps at all: for a card to support any of DX10, it has to support all of it.
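
In practice, the only "capability check" a DX10 game needs is whether the device can be created at all. A rough sketch (device creation only, error handling trimmed; check the SDK for the exact signature details):

    #include <d3d10.h>

    ID3D10Device *device = NULL;

    // If this succeeds, the full DX10 feature set (shader model 4.0,
    // geometry shaders, etc.) is guaranteed - there are no caps bits to query.
    HRESULT hr = D3D10CreateDevice(NULL,                        // default adapter
                                   D3D10_DRIVER_TYPE_HARDWARE,  // real GPU, not the refrast
                                   NULL, 0,
                                   D3D10_SDK_VERSION,
                                   &device);

    if (FAILED(hr))
    {
        // Not a DX10 card (or not Vista): fall back to a DX9 code path.
    }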

Hence, DX9 also has to stay around in case people want to make games that run on older hardware.

Realistically, I suspect most games will stick with DX9 for the next few years. But eventually everyone will have DX10 cards, and the great thing there is, finally games will have a totally fixed feature set that they can assume is always available on every machine. This should make life a lot easier for game devs 5 years from now.

In terms of significant new features, the biggest thing in DX10 is actually behind the scenes: the driver model has been totally redesigned to give some big performance improvements. They've moved to a model where more things have to be bound up front, allowing drivers to do aggressive optimisation at resource creation time and hopefully not have to do any fixup work at all during the actual rendering. These changes affect many parts of the API, for instance renderstates are now only set via a handful of state objects (no more SetRenderState), but the overall goal is robustness and performance rather than actual new features.
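
To give a flavour of the state object change, here's a loose sketch of the same alpha blending setup in both APIs (written from memory, so treat the exact structure members as approximate; device9 and device10 stand for already-created devices):

    #include <d3d9.h>
    #include <d3d10.h>

    extern IDirect3DDevice9 *device9;
    extern ID3D10Device     *device10;

    void enable_alpha_blending()
    {
        // DX9: individually mutable render states, validated at draw time.
        device9->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        device9->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
        device9->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

        // DX10: one immutable state object, created and validated up front,
        // so the driver can optimise at creation time rather than per draw call.
        D3D10_BLEND_DESC desc = {0};
        desc.BlendEnable[0]           = TRUE;
        desc.SrcBlend                 = D3D10_BLEND_SRC_ALPHA;
        desc.DestBlend                = D3D10_BLEND_INV_SRC_ALPHA;
        desc.BlendOp                  = D3D10_BLEND_OP_ADD;
        desc.SrcBlendAlpha            = D3D10_BLEND_ONE;
        desc.DestBlendAlpha           = D3D10_BLEND_ZERO;
        desc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;
        desc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

        ID3D10BlendState *alphaBlend = NULL;
        device10->CreateBlendState(&desc, &alphaBlend);

        float blendFactor[4] = {0, 0, 0, 0};
        device10->OMSetBlendState(alphaBlend, blendFactor, 0xffffffff);  // cheap bind
    }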

Also, DX10 introduces virtual memory on the GPU. This isn't so immediately interesting for games, other than it means alt+tab will finally work automatically, but it has big implications for multiple windowed applications cooperating to share the GPU (kind of important when the OS itself wants to use GPU effects for menus and so on).

Sirocco

Thanks Bob, Shawn. That clears up a lot of nagging concerns I was having moving into the Vista era.

Matt Smith

"This should make life a lot easier for game devs 5 years from now." this carries the strong assumption that nothing amazing will appear on new cards. 5 years is an awful long time in computers.

Shawn Hargreaves

Of course hardware will go on improving and new capabilities will arrive: nothing is ever going to stop that.

The benefit of DX10 will be a constant lowest common denominator sitting underneath the newer features.

Once we someday pass the barrier of saying "my game requires a DX10 card", that massively reduces the compatibility testing burden at the low end. As long as your game contains a code path that runs on a stock DX10 card, you can be sure that will always work on every machine in the world.

Then you can go off and use more advanced higher level features where they are supported, but where they are not, you always have a known feature set to fall back on.

That's a big big timesaving compared to the world today, where there are over 200 independent capability bits, plus a huge number of card oddities that can't even be expressed in the caps!

Plus, the DX10 feature set is "good enough" for a ridiculously wide range of rendering techniques. There is a very strong case to be made that now we have a fully general programmable GPU, most of the future advances will be on performance (longer shaders that run faster with bigger textures and better AA) rather than actual new capabilities. It has become more like CPU improvements: they get faster all the time, but you still program them with the same basic set of instructions.

Also, this lowest common denominator has huge implications for things other than high end 3D games. Today, how many UI apps use D3D to do alpha blending? Almost none, because that isn't consistently available and they didn't want to take on the compatibility testing burden. Imagine a universe where every silly little web card game can confidently rely on a decent set of GPU capabilities...

Jakub Wasilewski

Sure, it'd be cool and all, but GPU developers tend to quickly diverge from the lowest common denominator.

Look at OpenGL. Successive versions of the API support more and more functionality that could become the LCD you mention, but instead we have the whole extension system, which is basically what you claim DX10 will avoid.

Once 90% of the cards in people's computers support DX10, the GPU industry will be far ahead of that. And no self-respecting commercial game studio will release a game that doesn't use all the capabilities of your card, regardless of the additional testing costs. A game that is not as beautiful as its competitors just doesn't sell as well - that is the common viewpoint held in the gaming industry.

So, I don't think DX10 will solve those problems. Instead we will get multiple extensions (vendor specific, standard or even Microsoft-designed) built on top of DX10, which will be implemented or not on the GPUs, and sometimes even misimplemented.

I'm not saying that DX10 is bad. I'm just not very eager to believe that it will become the remedy for GPU compatibility problems.

[edit]

Shawn said:

It has become more like CPU improvements: they get faster all the time, but you still program them with the same basic set of instructions.

Sure, the basic instructions haven't changed, but almost every new line of CPUs introduced its own very useful extensions. Examples include conditional moves, MMX, SSE, and 3DNow!. Of course not every application will really benefit greatly from using those, but there are some that do, and they aren't compiled for the "lowest common denominator" 386 instruction set.

Shawn Hargreaves

Sure, at the high end extensions will be important and people will use them.

But the high end isn't where the compatibility pain happens. It is the older hardware, the cards where your game doesn't look any good at all, but still has to run in some shape or form, that cause the huge testing burden. A modern 3D game spends probably 90% of its testing budget supporting cards from 3 or more years ago. Those represent a tiny percentage of the eventual sales, but a ridiculous amount of the dev cost. And you can't just not support them, because people are stupid and will still buy your game even if the box says it doesn't support your card, then they phone up tech support when it doesn't work, and every tech support call costs the publisher the profit from 5 to 10 sales! So you have to waste loads of time making it work on these crappy old cards.

That's where DX10 helps. The high end is still for you to solve, but that's ok: this is where game engine programmers can differentiate and try to sell their product, so it's a good place to spend their time. The point is that DX10 (will someday) give a constant platform to remove all the irritating crap around the low end support.

You should also consider the huge number of apps which aren't games and have no interest in chasing the high end. Today they use things like GDI, Flash, or DirectDraw, because those are constant and universally available. In the future they will be able to use DX10. I think that's pretty significant: it shifts the universal lowest common denominator of graphics rendering, which is currently just a 2D framebuffer containing pixels, into a world of universal hardware rendering, high quality filtering, blending, and sophisticated programmable shaders.

relpatseht

From what I've read in this thread, it seems that DirectX 10 will just be raising the current lowest common denominator to a much higher point, and though I know graphics can only get so good, I still wouldn't say we are anywhere near graphical perfection. Basically, this means that in a few years DirectX 10 will be what OpenGL 1.1 is now (I don't know what the DirectX equivalent would be): something that all cards fully support, but something that is never used without additional extensions in any modern game.

Of course, by then Microsoft will probably be on about DirectX 15, and as long as they continue what they started with DirectX 10 - by which I mean continually raising the lowest common denominator - this wouldn't be much of a problem, unless you consider it troublesome to buy a new, top-of-the-line video card with every release of DirectX.

Furthermore, stupid people will still buy games that say on the box that they require DirectX 10 when their card doesn't support it, and they will still contact technical support. The only difference is that when the big flashing box appears saying their computer does not support DirectX 10, they will ask tech support why their card doesn't support DirectX 10, what DirectX 10 is, and what they have to do to get it, rather than asking why the game doesn't work. Your average person is completely ignorant about computers, so this will undoubtedly happen.

[edit]

Don't get me wrong, it definitely is a good thing that soon all modern games will require DirectX 10, and thus require everyone to have what would currently be called a decent video card. I just don't think that requiring full support or none at all for DirectX 10 will be a solution for anything for more than a few years.

Shawn Hargreaves

I half agree with you, but also half don't.

There has been a huge sea change in GPU hardware over the last year. Finally, programmability has come far enough that there aren't really any nasty fixed function warts left: it's all just shader microcode. That means future improvements aren't going to be new features, they're just going to be faster ways of running the existing feature set (which allows people to write longer shaders and use more advanced techniques, but ultimately it is all just HLSL shader code).

Think Turing complete: once you have that, it's all just implementation details to make things run faster.

(I'm simplifying grossly here, because there are still way too many fixed function warts like limited numbers of interpolators, or the framebuffer blend silicon. But the theory is broadly true. And you can already see this in action. Most modern games that use ps 3.0 aren't actually caring about any of the new ps 3.0 features: all they want is ps 2.0 capabilities with longer shaders and more fillrate...)

Marcello
Quote:

But the high end isn't where the compatibility pain happens. It is the older hardware, the cards where your game doesn't look any good at all, but still has to run in some shape or form, that cause the huge testing burden. A modern 3D game spends probably 90% of its testing budget supporting cards from 3 or more years ago. Those represent a tiny percentage of the eventual sales, but a ridiculous amount of the dev cost. And you can't just not support them, because people are stupid and will still buy your game even if the box says it doesn't support your card, then they phone up tech support when it doesn't work, and every tech support call costs the publisher the profit from 5 to 10 sales! So you have to waste loads of time making it work on these crappy old cards.

If I understand what you're saying correctly, the exact same thing will still happen. The box will say "Needs a DX10 card" which is no different from saying "doesn't work with your card" or "requires opengl 2.0" or whatever. People are still going to try to play the game on a card that isn't DX10 compatible and they are still going to call up tech support when it doesn't run.

I'm failing to see how DX10 will change that.

Marcello

relpatseht

Yes, there have been massive changes in GPU hardware over the last few years, and people are currently mostly concerned with bigger, better shaders, but I am still quite hesitant to say there won't be even bigger changes in the years to come. If there is one thing I have been taught the hard way too many times, it's that as soon as I say something will not be done or cannot be done, someone does it anyway and then punches me in the face with it.

Shawn Hargreaves

One day everyone will have DX10.

The trouble is today there is no such simple thing you can say. Publishers try things like "Requires DX8", which is kind of reasonable: the number of people who don't have a DX8 card today is low enough to be commercially acceptable.

But DX8 isn't a clear enough requirement. You end up with things like "Requires DX8 with pixel shader 1.1, or if not that, at least 3 multitexture blends supported, and must have hardware skinning for at least 2 bones, and must support D3DCAPSNON_POW2CONDITIONAL, and UBYTE vertex formats".

That is a crappy consumer experience, and causes loads of pain for the poor game devs in the trenches.
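
For anyone who hasn't had the pleasure, that kind of box requirement turns into caps-grovelling code along these lines (a sketch, from memory, so take the exact cap names with a grain of salt; d3d9 stands for an already-created IDirect3D9 interface):

    #include <d3d9.h>

    extern IDirect3D9 *d3d9;

    bool card_is_good_enough()
    {
        D3DCAPS9 caps;
        d3d9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

        // "Pixel shader 1.1, or if not that..."
        if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
            return true;

        // "...at least 3 multitexture blends, 2-bone hardware skinning,
        //  conditional non-pow2 textures, and UBYTE4 vertex formats."
        return caps.MaxSimultaneousTextures >= 3
            && caps.MaxVertexBlendMatrices >= 2
            && (caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL) != 0
            && (caps.DeclTypes & D3DDTCAPS_UBYTE4) != 0;
        // ...and that's before the per-vendor quirks no caps bit can express.
    }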

relpatseht

Yeah, I agree that the lowest common denominator needs to be raised, but DirectX 10 is only a temporary solution. Eventually DirectX 10 will be what OpenGL 1.1 is now (once again, I don't know the DirectX equivalent): everyone will support it, but all games will require extensions and will say things on the box such as, "Requires DirectX 10 with SUPER-MEGA-HYPER-ULTRA-REAL-GFX-EXTENSION version 2.5."

Shawn Hargreaves

Sure, but then 5 or 10 years from now there can be a DX12 or DX15 or whatever that provides a new common base.

The point is a philosophical change: rather than just exposing whatever the hardware caps are and having bits to report this, for the first time DX is taking a strong stance on standardising a fixed set of functionality that all hardware can be expected to support.

I'm personally very excited about that.

The last time we had this level of universal standardisation was when GDI mandated that all screens had to consist of pixels, and it was no longer ok to use a text mode display!

Marcello

Although it's not all that standard if you're limiting yourself to just Windows Vista. I'd much rather have a cross-platform library. Otherwise I'm better off developing for a console, where none of this is an issue anyway.

What I want to know is why they didn't call it DirectXX.

Marcello

relpatseht
Quote:

Sure, but then 5 or 10 years from now there can be a DX12 or DX15 or whatever that provides a new common base.

The problem with every release of DirectX providing a new common base is that no one wants to go out and buy a new top-of-the-line, $500 video card every year with the new release of DirectX.

Quote:

The last time we had this level of universal standardisation was when GDI mandated that all screens had to consist of pixels, and it was no longer ok to use a text mode display!

I bet there were still people who had a text mode display for some years afterward. DirectX 10 isn't providing any new standardization; it is simply forcing the current standard (whatever the DirectX equivalent of OpenGL 2.0 is) upon consumers.

Once again, this really isn't a bad thing - the lowest common denominator definitely needs to be raised. I just don't think that keeping up the practice of providing a new common base with every release of DirectX is fair to the consumer, and not providing a new common base with every release means that none of the current problems will be solved for very long.

Thomas Harte
Quote:

Also, DX10 introduces virtual memory on the GPU. This isn't so immediately interesting for games, other than it means alt+tab will finally work automatically, but it has big implications for multiple windowed applications cooperating to share the GPU (kind of important when the OS itself wants to use GPU effects for menus and so on).

DirectX is a really stupid place to implement VRAM virtualisation if Microsoft are serious about keeping up with everyone else's GUI advancements after their Vista catch-up. Is there a reason, other than the apparent incompetence of management over OS development (source)?

OS X already does virtualisation at the OS level. I guess the upcoming GL-based X servers and window managers end up doing much the same due to GL's hands-off VRAM management.

Bob
Quote:

The problem with every release of DirectX providing a new common base is that no one wants to go out and buy a new top-of-the-line, $500 video card every year with the new release of DirectX.

You don't need a $500 video card to get access to newer versions of DirectX. Low end GPUs will implement D3D10 (or 11, etc) at reduced speed. All the required features will be there, though.

relpatseht

Even the lower end cards will probably cost around $150 when they first come out. Of course, they will go down to about $70 after a few months. Still, I don't think it is a good thing to demand hardware upgrades with every release.

Thomas Fjellstrom
Quote:

One day everyone will have DX10.

Highly doubtful. My dad doesn't even have DX8, AFAIK. His fiancée's machine has 9.0c... but it's not capable of much of anything, being a 450 with 128MB of RAM.

People have very little reason to upgrade yet again to a new OS version for $300+. Except that MS is forcing the issue yet again by making APIs require specific Windows versions. Isn't that a little unethical?

And I have even less reason to buy a copy of Vista. I haven't ever owned a real copy of any Windows version; the only legal access I had at one point was a lame NEC rescue CD that would only work on the machine it came with (unethical again, I think - the software was paid for, so I should have been allowed to use it on my next computer, but no, MS and OEMs like to rip off their customers).

Basically, I have never owned any MS software, and I'm not about to start. I've been treated like a dirty criminal by MS from day one. So I use alternative OSes instead, and I actually prefer developing on Unix-like OSes much more than on any Windows version you care to throw at me. The software costs WAY too much, and without a $1000+ licence for MSVC you don't get any really useful features for developing real software on Windows.

At this point, even if I do manage to start developing some commercial software, I'll only port it to Windows after it's done, if it makes sense.

Goalie Ca

Everyone will have 10.0 when the new hotness is 12.9.

Richard Phipps

Ironically, if writing shaders becomes ever more important, we could see a return of the demo-crew-style 'hacks'. By this I mean that rather than focusing on how many polygons can be rendered, the trick will be clever shader writing to create cool new effects. ;)

Jakub Wasilewski
Quote:

That's where DX10 helps. The high end is still for you to solve, but that's ok: this is where game engine programmers can differentiate and try to sell their product, so it's a good place to spend their time. The point is that DX10 (will someday) give a constant platform to remove all the irritating crap around the low end support.

OK, I think we can agree on that point. This will only become viable if the GPU manufacturers manage to create hardware exactly up to the specification, and if the DX10 specification itself is strict and verbose enough, but there's a good chance of that.

Shawn Hargreaves

The thing I think a lot of you are missing is that the lowest common denominator thing isn't really about cutting edge 3D games.

For high end games, sure, hardware improves all the time, and new DX releases are needed to keep up with this. That's one universe.

But there is another universe. That is the universe my parents live in, where they bought a computer 3 years ago, will probably replace it in 2 or 3 more years, and don't have a clue what kind of GPU they have (or even what a GPU is).

Most people on allegro.cc live in the first universe, but the vast majority of people on the planet are part of the second. These people don't play Doom or Battlefield. They spend most of their time using a web browser, or writing email, or using Word or Excel, or playing games like Freecell or Poker or Bejeweled. There are a lot of these people out there, and they play a lot of games: way way way more hours go into Freecell than all the Doom and Battlefield players combined!

And yet today, the graphics industry has nothing to offer this mainstream market. Yearly rev cycles are a nightmare for games like Freecell! So today, Freecell doesn't use the GPU at all. It just sticks with a lowest common denominator that was established sometime back in the mid 80's. And the vast majority of computer programs in the world are the same.

So the problem isn't really one of yearly rev cycles, it is how do we take all these apps that haven't advanced their graphics technology since 1985, and convince them to start using 2005 era technology?

Personally I think that is a very interesting problem to try to solve (it poses a bunch of really hard issues that are very different to the sort of things high end game developers typically worry about), and one that can have a big impact on changing the way software looks and feels.

Mainstream software just doesn't move very fast.

Once it was text mode.
Then it became raster graphics.
(someday) Then DX10 made the GPU accessible to everyone.

These big shifts only happen once every decade or more - it is a whole different scale of thinking to the high end game market.

(note: the idea isn't that apps would actually code directly to DX10. DX is a low level API that just exposes the GPU caps: other higher level abstractions will be used to write the actual UI code. For instance Flash could easily be accelerated using DX10 hardware. And Microsoft is working on the WPF layer (formerly known as Avalon). I'm sure there will be many others interested in taking advantage of richer rendering capabilities...)

Quote:

Even the lower end cards will probably cost around $150 when they first come out. Of course, they will go down to about $70 after a few months.

One thing I've learned about MS is that they think long term. The goal isn't the $70 cards after a few months: it is the situation a few years away when even the crappiest integrated motherboard video hardware will support DX10 shaders. That's really not as far fetched as it may currently seem, but it is a much longer lead plan than just affecting the product cycle for the next holiday season.

Quote:

Ironically, if writing shaders becomes ever more important, we could see a return of the demo-crew-style 'hacks'. By this I mean that rather than focusing on how many polygons can be rendered, the trick will be clever shader writing to create cool new effects. ;)

That is already happening to a major extent. It's been two or three years since I've seen anyone seriously worrying about how to maximize polygon throughput, and most games aren't using any more polygons now than they were a couple of years ago. The extra resources are going into longer shaders and more passes, drawing the same polygons in a better way.

Quote:

OK, I think we can agree on that point. This will only become viable if the GPU manufacturers manage to create hardware exactly up to the specification, and if the DX10 specification itself is strict and verbose enough, but there's a good chance of that.

Believe me, the spec is very verbose :-) Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

Bob
Quote:

Believe me, the spec is very verbose :-) Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

Indeed. The main reason for that is some people pushing Microsoft to actually write a spec instead of having said people try to figure out what the refrast was trying to achieve.

Carrus85
Quote:

One day everyone will have DX10.

Heh, hardly. For example, I'm currently stuck with a GeForce 440 Go MX 64MB Card, with NO POSSIBLE WAY OF UPGRADING. (Hooray for laptop video cards! If you can upgrade these, please let me know, although I'm pretty sure you can't without either invalidating your warranty or doing some huge degree of hardware solder/desoldering.) Basically, I'm stuck at a DirectX 7/8 level. No shader support whatsoever. Which is kinda sad, considering it is a 3400+ AMD 64bit processor...

HoHo
Quote:

NO POSSIBLE WAY OF UPGRADING.

Will you use the same laptop in five years?

Murat AYIK

People criticise Macs and not laptops! Is it too hard to make something changeable which looks like a BIOS or those old VGA-RAMs? Anyway, the strategy behind DX10 is very nice; I hope it influences the hardware designers.

Jakub Wasilewski
Quote:

Not the API spec (that's written to be readable by normal users) but the hardware/driver interface spec is about 15x longer than that for any previous DX version!

That's great :). Where there is no leeway in the specification, there is no room for two different interpretations by GPU manufacturers, and everyone is happy.

Bob
Quote:

and everyone is happy.

... except for users, OEMs and GPU manufacturers, of course. Some leeway is good. It promotes innovation. It means that new image quality improvement features can be applied to old applications / games. Plus, no one can write iron-clad specs, not even Microsoft. And you don't want to design (or pay for) hardware to be bug-for-bug compatible with DX10 forever.

Jakub Wasilewski

Yeah, some leeway is good, but not in everything. We don't want creativity of the "trying to figure out what the author of the specification meant" kind. The places where improvements should be allowed should be taken into account when writing the specification. We don't want "improvements" that just mean every single card implements something differently, so that something which looks good on 90% of cards looks like crap on the remaining 10% that decided to interpret the specification otherwise.

Also "bug for bug" compatibility is not what I'd like to see. There is a specification. If a driver does not adhere to it, it's the driver manufacturer's fault and should be corrected. If DX itself does something not as it is stated in the specification, it should be corrected.

Of course, reality doesn't always present us with such a clear-cut situation, but I will still claim that a better-prepared, longer specification will allow for better compatibility amongst GPUs and won't halt innovation.

However, I see that I'm not the most experienced or informed on the topic amongst us (seeing that you work for nVidia, and Shawn works on DX), so my opinion might not be very valuable... but I know how painful it is to struggle against incompatibilities between different vendors' implementations of the same thing (IE vs the rest of the world, various GPUs with OpenGL).

HoHo
Quote:

Some leeway is good. It promotes innovation. It means that new image quality improvement features can be applied to old applications / games.

That's what OpenGL and its extensions are good for :)

Shawn Hargreaves

Just to clarify: I don't actually work on DX, I'm just pretty involved with the people who do.

I would say that leeway to extend a spec is good, but leeway to not implement parts of it, or to implement parts of it differently, is very bad.

Example: it's cool that Intel can add SSE2 to their instruction set. But once things have been added and standardized, they should be left alone. For instance it would be a nightmare if they suddenly decided to drop the CMOV instruction! Or even worse, to change the way it sets the carry flag.

A healthy platform is one where you have a nice solid base of stuff that you know will be there and you know will always work the same way. Then if you want to use extra features, you can check for them, but 99% of your code doesn't care about those new features so it can be kept nice and clean and robust.
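
In code, that pattern is just a one-time feature check with a guaranteed fallback; the other 99% of the program never looks at it. A minimal sketch using MSVC's __cpuid intrinsic (the bit position is the standard CPUID SSE2 flag):

    #include <intrin.h>
    #include <stdio.h>

    // Returns true if the CPU reports SSE2 support.
    static bool cpu_has_sse2()
    {
        int info[4];
        __cpuid(info, 1);                    // CPUID leaf 1: processor feature flags
        return (info[3] & (1 << 26)) != 0;   // EDX bit 26 = SSE2
    }

    int main()
    {
        // Only the hot inner loops would ever branch on this; everything
        // else is written against the baseline instruction set.
        printf(cpu_has_sse2() ? "using the SSE2 path\n"
                              : "using the baseline path\n");
        return 0;
    }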

Goalie Ca

I have to agree with Shawn. A good, well thought out standard makes everyone happy. POSIX is an excellent example of this. Same with the building code, electrical code, etc. Then of course there are accepted practices which all engineers are held accountable to.

One thing Microsoft has done right is legacy compatibility. They've run into so many problems, though, because of the monolithic design and the hacked interfaces. It's like they "use case" designed everything. In the programs there's literally a button for everything and a function for everything. Especially in the case of .NET, if it's not in WinForms, good luck trying to get it working. .NET is a classic example of Microsoft standards. They didn't think anything through, and now they're left to support it for a gazillion years while they get ready to release a newer/bigger/better library in the meantime.

A cleaner, more modular design is far more future-proof. As an engineering student, I find it hard to believe that people can't sit down and come up with a good standard. The engineers most certainly know what's coming up. I would have to pin the problem on management, though, because investors and companies only ever look short-term. Not many people think about the long term and sustainability.

edit: I also forgot to mention AMD and Intel. They've done an excellent job so far. It's also nice to see amd64 take some advantage of the completely new mode. One thing that concerns me about Apple is that they should have gone straight to 64-bit Intel chips so they wouldn't have to support 32/64-bit PPCs and 32/64-bit Intels. Luckily, though, NeXT was well thought out and came with universal binary capabilities, among many other things. NeXT was really an engineering marvel from an OS point of view. So much remains unchanged from all that time ago.

Thomas Harte
Goalie Ca said:

One thing Microsoft has done right is legacy compatibility. They've run into so many problems, though, because of the monolithic design and the hacked interfaces.

Yes, I think they missed a golden opportunity with Windows 95. There may not have been time to come up with a clean new codebase, but as there was going to be an API break whatever they did they should have taken the opportunity to make it a much more severe one.

In a way I prefer the Apple Classic to OS X transition. PowerPC OS X had Classic compatibility, and to an extent Classic apps can still be run on PowerPC OS X, even if they don't interact with the new OS in all the expected ways, but the process isn't exactly speedy. OS X meanwhile contains some completely new APIs, some "this is what we learnt with NextStep" APIs and a trimmed and consolidated version of the old Classic APIs. When everyone knew OS X was coming, developers could choose to stick to the consolidated libraries and release binaries that worked completely natively in both Classic and OS X. After OS X came out, Classic remained the OS supplied as the default boot on all new hardware for a couple of years, until OS X-native versions of all the major apps were available and most of the major OS X bugs and issues had been remedied.

That said, as Apple control both the hardware and the software and really only have to worry about keeping Adobe (for Photoshop) and Microsoft (for Office) abreast of new OS developments, they can very easily adopt a tiered transition like this, whereas Microsoft almost certainly couldn't. They need big-bang launches of new OSes to properly control how hardware and software are bundled.

Shawn Hargreaves said:

So the problem isn't really one of yearly rev cycles, it is how do we take all these apps that haven't advanced their graphics technology since 1985, and convince them to start using 2005 era technology?

Maybe persuade them to port to Mac OS?

Shawn Hargreaves said:

One thing I've learned about MS is that they think long term.

Indeed - that's why their rendering technology is still based on 1985 paradigms.

HoHo said:

Will you use the same laptop in five years?

I'm now using one that is 53 months old and says (c) 2001 on the base. I'm not "still" using it though because I've only had it a year and a bit, replacing my slightly older desktop.

Murat AYIK said:

People criticise Macs and not laptops! Is it too hard to make something changeable which looks like a BIOS or those old VGA-RAMs?

Intel promote the Extensible Firmware Interface, which is a substantial step up from the olde BIOS but doesn't do anything like a full 3D API. It does mean you can dump VGA compatibility (and 8086 real mode), though. Microsoft were to support EFI with Vista but have suddenly decided they aren't going to.

Prior to that, Sun had invented Open Firmware - also used by IBM and Apple/PowerPC - which was processor-neutral and took the mini firmware drivers as p-code based on Forth. That also worked in a substantially smarter way than the BIOS, but I don't think it was ever implemented by anyone alongside x86 processors.

HoHo
Quote:

They've done an excellent job so far.

I disagree with that a bit. Using basically the same instruction set as 25 years ago is not very efficient. Sure, it gives really nice backwards compatibility, but efficiency suffers because of it. SIMD stuff is fun and all, but I think current CPUs have too little power in those units.

Of course, with multicore architectures it gets a little simpler. E.g. they don't have to use so many transistors for branch predictors; just cram another core into the package and the net win will probably be greater.

Also, it's too bad AMD only doubled the register count when it created the 64-bit architecture. I'm not sure how expensive it is to quadruple it, but it surely would have made the compiler's job way easier if they had done it :)

One possibly interesting future development might be AMD's new plan of putting CPUs and other chips in directly connected sockets. I wouldn't mind having the power of, e.g., 32 4x32 FP SIMD units as an add-on for my PC. Perhaps the good old days of separate FPUs are coming back :)

Thomas Harte
Quote:

I disagree with that a bit. Using basically the same instruction set as 25 years ago is not very efficient. Sure, it gives really nice backwards compatibility, but efficiency suffers because of it. SIMD stuff is fun and all, but I think current CPUs have too little power in those units.

But Intel seem to have been able to attract Apple away from the PowerPC realm, which was one of plentiful registers (32 integer, 32 floating point, 32 vector) and a modern (c. 1993) RISC instruction set that was designed from day one with 64-bit operation in mind, so they must have something going for them. Independent tests put the first Intel iMac around 15% faster than the G5 it replaces (for universal binaries; some tasks such as MP3 encoding are marginally slower, but not by much), and the MacBook Pro is probably at least twice as fast as the old G4 PowerBook.

I think the main thing of note is that some claim the Intel iMac can't decode full-resolution HDTV video without frame skipping, whereas the old G5 iMac definitely could. Others say there isn't a noticeable difference.

HoHo
Quote:

But Intel seem to have been able to attract Apple away from the PowerPC realm, which was one of plentiful registers (32 integer, 32 floating point, 32 vector) and a modern (c. 1993) RISC instruction set that was designed from day one with 64-bit operation in mind, so they must have something going for them.

Wasn't the problem that IBM couldn't achieve what it had promised? Something like a dual-core 2.5GHz G5 in a laptop without liquid cooling?
;)

Shawn Hargreaves

Yep. Which I think says a lot about the "but Intel hardware is an inefficient design" argument.

Sure, the x86 instruction set isn't the greatest ever. But in the real world where engineering realities count for a lot more than theoretical aesthetics, chip designers sure do seem to be having a lot of success manufacturing them!

Thomas Harte
Quote:

Wasn't the problem that IBM couldn't achieve what it had promised? Something like a dual-core 2.5GHz G5 in a laptop without liquid cooling?

For Apple, they seemed unable to produce a 3GHz G5 part for the desktop, or any G5 with sufficient heat characteristics to be put in a laptop. Similarly, Motorola/Freescale (who were supplying the G4s) seemed to have some sort of mental block on FSBs above 167MHz.

The Intel chip does reputedly cost about three times as much as the G5 it replaces, but Apple have been clear that it's all about the roadmap and about performance per watt. If the G5 isn't going anywhere except for increasingly specialised console designs (PowerPC relatives will run all three of the next-generation consoles), then I guess the switch is very sensible.

Reading between the lines, I think there may have been a sour relationship between Apple and IBM. Back when they (plus Motorola) designed the PowerPC and the reference platform, IBM intended to put out a Mac compatible, but in the end Apple wouldn't let them license the Mac OS on the terms they wanted: IBM wanted to offer it as an optional install, while Apple would only supply it as a default install. <Insert your own comment on Apple's stupidity here>. Then AltiVec seems to have been added to the G4 as a collusion between Apple and Motorola, leaving IBM to accept Apple's customisations to their G5 design (which doesn't inherently have AltiVec) if they wanted Apple's business.

On the other hand, I think Apple have realised what a lot of nerdy tech people have long wanted not to admit - nobody cares about machine internals. They care about the web, their email and Office. A few also care about syncing their personal music player.

I also think Intel have a way of constantly surprising the industry. Before the Pentium it was assumed that the old CISC/Intel-style architecture was dead, hence projects like FX!32 (allowing you to run your old-fashioned Intel binaries under the Alpha version of Windows NT) and Apple's switch to PowerPC. With the Core Solo/Duo, Intel seem to have caught up with AMD again in terms of work per cycle, and it seems likely Intel will go quad-core first. No doubt it'll soon turn into a razor blade type scenario. Gillette: "Fuck everything, we're doing five blades!"

More registers would be nice, but when cache or hyper-threading can achieve the same thing in terms of silicon utilisation, and we're all leaving the compiler to sort it out anyway, then what difference does it make? You just end up with whoever can spend the most on R&D managing to ship the best processors, and at the minute that is the x86 realm.

As an aside, rereading my earlier comments it sounds a bit like I'm a rabid anti-Microsoft loony. In fact I use Office frequently and don't even have OpenOffice (YUCK - X11) or NeoOffice/J installed. I rarely, if ever, mention to my real-life friends that I use an Apple unless it directly comes up for some reason. I've only switched to Apple recently, and that's because right now they seem to be firing on all cylinders. I wouldn't have bought a Classic OS machine over a Windows 95/98/2000 box if you'd paid me, and if Microsoft leapfrog Apple then I'll happily switch back. Sure, I'd like Apple's marketshare to grow, but that's just because I'd like them to be more secure and to keep up the competition that benefits all of us. Realistically, the iPod vertical integration model isn't going to be a cash cow for very much longer, and I'd hate to see them go.
