Tuesday, August 19, 2008

Physx gets integrated into leading engine

Nvidia gains further support

EMERGENT HAS announced plans to further its partnership with Nvidia on Emergent's own Gamebryo development platform.

In an announcement which can be seen as a sizeable win for Nvidia, Emergent will integrate Physx technology into all upcoming versions of the 'industry-leading' Gamebryo.

The next release of Emergent's Gamebryo is scheduled for this autumn and will ship with the Nvidia Physx engine directly integrated into the platform.

Gamebryo has been optimised for development on the Playstation 3, Xbox 360, Wii and PC.

It was most recently selected as the development platform for the console titles Civilization Revolution by Firaxis and Splatterhouse by BottleRocket.

Gamebryo is also being used by EA-Mythic for its upcoming game Warhammer Online: Age of Reckoning, as well as by Larian Studios for Divinity 2: Ego Draconis.

Emergent has stated that, to date, Gamebryo has been used in more than 200 shipped game titles, ranging from massively multiplayer online games and high-end retail games across multiple genres to casual games.

It makes sense for Gamebryo to use Physx as an underpinning technology. Physx works across all major gaming platforms, including the above consoles and the PC, and can be accelerated by the CPU or by any CUDA-capable general-purpose parallel computing processor, which obviously includes Nvidia's own Geforce GPUs.

By Dean Pullen at
http://www.theinquirer.net/

Saturday, August 16, 2008

Nvidia drivers for OpenGL 3.0

NVIDIA has released a new set of beta drivers for developers with support for the OpenGL 3.0 API and GLSL 1.30 shading language.

Just two days after the Khronos Group officially released the OpenGL 3.0 specifications, NVIDIA has deployed its first round of beta drivers (version 177.89) with support for the new API. By default, the new features are disabled and must be activated using NVIDIA’s NVemulate utility. In order to activate OpenGL 3.0 and GLSL 1.30 functionality, you must be using a GeForce 8 series or later card, or one of several Quadro FX cards. Cards from both the desktop and notebook lines are supported.

The drivers are available for both 32-bit and 64-bit versions of Windows XP and Windows Vista and will integrate into the standard ForceWare driver releases following the SIGGRAPH 2008 conference as part of NVIDIA’s Big Bang II.

How RAM works on a dual-GPU card

Current dual-GPU cards work by load balancing between the two GPU cores on a single video card. Current designs don't have shared memory, so each GPU core uses its own allocation of VRAM to render its share of the work, and the two halves are load balanced together seamlessly. Shared memory wouldn't actually give you any more RAM, but rather better management of how the two GPU cores could use the available on-board RAM.

With a shared-memory dual-GPU card you could essentially do more with the same amount of on-board RAM. For example, on a card with 1GB of shared VRAM, one GPU core could be using 256MB for one application while the other core was using 768MB for another. With a 2x512MB design, even though the card has 1GB of RAM on board in total, you're restricted to 512MB per core, so that same situation wouldn't be possible. Shared memory is more economical and flexible, but it is more complex to design, which is likely the main reason it isn't yet used in dual-GPU cards. I hope that helps answer your question.
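To make the split-versus-shared distinction concrete, here is a minimal Python sketch of the allocation rule each design imposes. The pool sizes, function names and the idea of per-application memory requests are purely illustrative assumptions, not any vendor's API:

```python
# Toy model contrasting split vs. shared VRAM on a hypothetical dual-GPU card.
# All names and numbers are illustrative only.

SPLIT_POOLS = [512, 512]   # 2 x 512MB, one private pool per GPU core
SHARED_POOL = 1024         # 1GB visible to both cores

def can_allocate_split(request_a_mb, request_b_mb, pools=SPLIT_POOLS):
    """Each core can only draw from its own private pool."""
    return request_a_mb <= pools[0] and request_b_mb <= pools[1]

def can_allocate_shared(request_a_mb, request_b_mb, pool=SHARED_POOL):
    """Both cores draw from one pool; only the total matters."""
    return request_a_mb + request_b_mb <= pool

# The 256MB + 768MB case from the text:
print(can_allocate_split(256, 768))   # False - 768MB exceeds the 512MB per-core pool
print(can_allocate_shared(256, 768))  # True  - the 1024MB total still fits
```

The split design rejects the 256MB + 768MB case even though the total fits in 1GB, while the shared pool only cares about the total.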

Friday, August 15, 2008

We can ray trace too, says Nvidia

HIGH-END GAMING PCs will soon have the raw power to handle photo-realistic graphics fluidly and in real time. Even Nvidia-powered ones.

Intel recently boasted that its promised Larrabee architecture could perform real-time ray tracing. Nvidia says it is already possible under existing technology designs.

Nvidia demonstrated at Siggraph in Los Angeles that traditional GPUs could still handle real-time ray tracing using CUDA.

What we are witnessing is ‘the world's first fully interactive GPU-based ray tracer’, said Nvidia at the show. They achieved it using a Quadro Plex 2100 D4 Visual Computing System (VCS) with four Quadro GPUs, each equipped with a gigabyte of memory.

A bit out of my budget, but there you go.

Nvidia says, ‘the ray tracer shows linear scaling rendering of a highly complex, two-million polygon, anti-aliased automotive styling application.’ True, perhaps, but it’s very much a techie’s demo, at the moment, and we’re a long way from seeing a mainstream game using this technology. And a lot can happen in the interim.

Still, fair play to Nvidia, which claims the polished car demo runs at 30fps at 1,920 x 1,080. It also included ‘an image-based lighting paint shader, ray traced shadows, and reflections and refractions’ at three bounces.

Nvidia also showed the demo running at 2,560 x 1,600, though it didn't disclose frame rates at that resolution.
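For a sense of scale, here is a rough back-of-the-envelope in Python on the 1080p figure, assuming, purely for illustration, one primary ray per pixel; the real per-pixel ray count (anti-aliasing samples plus shadow, reflection and refraction rays over three bounces) would be several times higher and wasn't disclosed:

```python
# Rough ray budget for the quoted 1,920 x 1,080 at 30fps figure.
# Assumption (not from Nvidia): one primary ray per pixel.
width, height, fps = 1920, 1080, 30

primary_rays_per_frame = width * height             # ~2.07 million
primary_rays_per_second = primary_rays_per_frame * fps

print(f"{primary_rays_per_second / 1e6:.1f} million primary rays per second")
# -> 62.2 million; secondary rays at three bounces multiply this several times over.
```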

Ray tracing is used to generate images by tracing the rays or paths of light as they bounce through and around the objects in a scene. When done right, it can produce photorealistic imagery because shadows are reproduced correctly. But the algorithms needed are complex and processor intensive.
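To give a flavour of what that involves, here is a tiny, purely illustrative CPU ray tracer in Python: one sphere, one point light, one primary ray per pixel and a shadow ray per hit. It has nothing to do with Nvidia's CUDA implementation, and every name and number in it is made up for the example:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def hit_sphere(origin, direction, center, radius):
    """Nearest intersection distance t along the ray, or None if it misses."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)          # direction is assumed unit length
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-4:                      # small epsilon avoids false self-hits
            return t
    return None

SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT = (5.0, 5.0, 0.0)
WIDTH, HEIGHT = 40, 20                    # tiny ASCII "framebuffer"

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Primary ray from a camera at the origin, through pixel (i, j).
        x = (i + 0.5) / WIDTH * 2.0 - 1.0
        y = 1.0 - (j + 0.5) / HEIGHT * 2.0
        ray = normalize((x, y, -1.0))
        t = hit_sphere((0.0, 0.0, 0.0), ray, SPHERE_CENTER, SPHERE_RADIUS)
        if t is None:
            row += "."                    # ray missed the scene
            continue
        # Shadow ray: if anything blocks the path to the light, the point is dark.
        point = tuple(t * d for d in ray)
        to_light = normalize(sub(LIGHT, point))
        shadowed = hit_sphere(point, to_light, SPHERE_CENTER, SPHERE_RADIUS) is not None
        row += "-" if shadowed else "#"
    print(row)
```

Even this toy version runs millions of intersection tests at a real image size, which is why doing it at 30fps with reflections and refractions takes serious hardware.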

Intel's Daniel Pohl has demoed customised versions of Quake 3 and Quake 4 running on Intel hardware that use ray tracing, with impressive results. ATI has shown ray-traced demos running on Radeon HD 4800 series hardware at its Cinema 2.0 event.

Now Nvidia is up to speed. Or is it?


This was an article written by Nick Booth at

http://www.theinquirer.net/