Sunday, 30 August 2009

The pixel revolution: the future of game graphics according to Edge

More than a decade ago, old-school game coders urged their peers to reject the one true way of a fixed graphics pipeline, as embodied by the combination of Microsoft’s emerging DirectX API and the first wave of PC GPUs. The old school lost. The semi-standardisation of the graphics pipeline underwrote a leap forward in the visual quality of PC games and consoles, from Xbox to PlayStation 3.

From the start, however, the sameness that a fixed pipeline imposed on game engines and their output was apparent. In the early days you could usually tell what GPU a PC game was running on simply by the graphical effects, irrespective of the title or developer.

Change began five years ago with the move from fixed-function GPUs to a new generation that enabled semi-programmability through shaders. Game developers embraced the relative freedom and now they want more.

A fantastic article on the future of real-time graphics techniques, featuring a number of mini interviews with experts in the field, can be found on the online version of the excellent Edge magazine. It is well worth a read for anybody wanting to investigate upcoming trends in real-time game visuals.

http://www.edge-online.com/features/the-pixel-revolution

Saturday, 29 August 2009

Student learning using a virtual 3D lab

This fall, students at a Baltimore County high school will explore the area surrounding Mount St. Helens in a vehicle that can morph from an aircraft into a car or a boat, learning how the environment has changed since the volcano’s 1980 eruption.

This will all be done without ever leaving their Chesapeake High School classroom as they will be using a three-dimensional Virtual Learning Environment developed by the Johns Hopkins University Applied Physics Laboratory (APL) with the University’s Center for Technology Education.

A coalition that also included Lockheed Martin, Northrop Grumman, and the University of Baltimore is deploying the environment, which was modeled after a state-of-the-art, 3D visualization facility at APL that was used for projects by the Department of Defense and NASA.

The Virtual Learning Environment includes 10 high-definition, 72-inch TV monitors arranged in two five-screen semicircles, which let students interact with what they see on screen using a custom-designed digital switch and touch-panel controller. In an adjoining lab, 30 workstations, each outfitted with three interconnected monitors, will display the same environments, allowing lessons to be followed either in teams or individually.

Thursday, 27 August 2009

Unity launches much-improved iPhone edition

Denmark's game engine group Unity has launched an iPhone edition of its development platform, packaged with many updates and improvements. Unity offers a range of development platforms for mobile devices, as well as browsers and the Wii.

The company calls its newest engine “Unity iPhone 1.5” and promises that the platform will run up to three times faster than the previous version. The new 1.5 release provides full support for native Objective-C and C++ code, with Unity claiming this will open “full access” to the newest series of iPhones.

Unity iPhone 1.5 supports 8-texture shading on the very latest iPhone model, but it also gives developers the chance to use many of the features found on all iPhones, from video playback to on-screen keyboard support and, perhaps most interesting of all, access to the smartphone’s GPS and navigation tools. (Location-based games, anyone?)

Finally, the engine allows multiple animations to be combined more quickly, while animation skinning can be as much as 400 per cent faster. The platform also introduces automatic batching for small dynamic objects and static geometry, which reduces draw call counts and can therefore boost performance in complex scenes. No wonder the Unity hype keeps gathering momentum; some of these updates are liberating in many ways and should keep indie developers and students very busy.
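For anyone unfamiliar with why batching matters: each draw call carries a fixed CPU/driver overhead, so merging many small meshes that share a material into a single submission means that overhead is paid once per material rather than once per object. Here is a rough, purely illustrative Python sketch of that bookkeeping (nothing to do with Unity's actual internals):

```python
from collections import defaultdict

# Each object is (mesh_name, material_name); without batching, each one is a draw call.
scene = [("rock_a", "stone"), ("rock_b", "stone"), ("crate", "wood"),
         ("rock_c", "stone"), ("barrel", "wood"), ("sign", "wood")]

def draw_calls_without_batching(objects):
    return len(objects)

def draw_calls_with_batching(objects):
    # Objects sharing a material are merged into one batch, hence one draw call each.
    batches = defaultdict(list)
    for mesh, material in objects:
        batches[material].append(mesh)
    return len(batches)

print(draw_calls_without_batching(scene))  # 6 draw calls
print(draw_calls_with_batching(scene))     # 2 draw calls (one per material)
```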

Podcast discussion on architecture and video games

During the Brighton-based Develop 2009 conference earlier this week, Edge Online magazine editor Alex Wiltshire chaired a panel discussion on the close relationship between architecture and videogames, and here is a recording of the full session available to download (which is also extremely interesting, in my opinion).

The panel included Viktor Antonov, the art director behind Half-Life 2 and Arkane's The Crossing as well as a creative director and writer for animated feature films; Lionhead's Rob Watkins, who has worked with architects Foster and Partners and was an artist on Fable and Fable II; and Rory Olcayto, now features editor at The Architects' Journal and once lead artist at developer Inner Workings in the late 90s.

http://dl.uksites.futureus.com/cvg/static/Edge%20Panel%20-%20Architecture%20and%20Videogames.mp3

Monday, 24 August 2009

Publication accepted for ACE 2009 conference

I've just had confirmation that a publication I co-authored with one of my final-year students here at Bournemouth University has been accepted for October's ACE 2009 (Advances In Computer Entertainment) conference in Athens, Greece. It will be in the poster track of the conference and will also appear in the ACM Digital Library as part of the proceedings.

The publication is titled A Rule-Based Approach To 3D Terrain Generation via Texture Splatting and proposes a novel way to create 3D landscapes for real-time applications.
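For readers who haven't come across the technique: texture splatting blends several tiling ground textures (grass, rock, sand and so on) per terrain pixel according to per-layer weight maps. The snippet below is a minimal, hypothetical NumPy sketch of that blend step, just to illustrate the general idea; it is not the rule-based method the paper itself proposes.

```python
import numpy as np

def splat(layers, weights):
    """Blend terrain texture layers with per-pixel weight (alpha) maps.

    layers  : list of HxWx3 float arrays, one tiling texture per layer
    weights : list of HxW float arrays, one weight map per layer
    Returns an HxWx3 blended terrain image.
    """
    weights = np.stack(weights, axis=0)                            # L x H x W
    weights = weights / np.clip(weights.sum(axis=0), 1e-6, None)   # normalise so weights sum to 1
    result = np.zeros_like(layers[0])
    for layer, w in zip(layers, weights):
        result += layer * w[..., None]                             # weight each layer per pixel
    return result

# Toy example: blend a 'grass' and a 'rock' texture with a height-based rule.
h = w = 64
grass = np.tile([0.2, 0.6, 0.2], (h, w, 1))
rock  = np.tile([0.5, 0.5, 0.5], (h, w, 1))
height = np.linspace(0.0, 1.0, h)[:, None] * np.ones((h, w))
terrain = splat([grass, rock], [1.0 - height, height])             # rock takes over at altitude
```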

Saturday, 22 August 2009

Photorealistic vs Non-photorealistic timelapse

Taken from Digital Urban, a great comparison of two skyline timelapses rendered in different styles: the first is photorealistic and the second is in cartoon-rendered form (processed using After Effects' 'Cartoon' effect). The interesting bit, as Digital Urban points out, is that the skyline is much clearer in the latter.


While frame coherence is (as expected for such a rapidly paced animation) not as good in the NPR timelapse, this is a good example of how alternative rendering styles can achieve better urban visualizations in terms of detail distinction and depth perception. The timelapse is of London, captured from a rooftop in Camden.
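The After Effects 'Cartoon' effect is proprietary, but the basic recipe behind such filters is usually a smoothing pass, colour quantisation and darkened edges. A very rough NumPy sketch of that posterise-plus-edges idea (purely illustrative, and certainly not After Effects' actual algorithm):

```python
import numpy as np

def cartoonify(img, levels=4, edge_strength=2.0):
    """Very rough 'cartoon' filter: quantise colours and darken strong edges.

    img : HxWx3 float array in [0, 1]
    """
    # Posterise: snap each channel to a small number of discrete levels.
    poster = np.round(img * (levels - 1)) / (levels - 1)

    # Crude edge detection on luminance via finite differences.
    lum = img.mean(axis=2)
    gx = np.abs(np.diff(lum, axis=1, prepend=lum[:, :1]))
    gy = np.abs(np.diff(lum, axis=0, prepend=lum[:1, :]))
    edges = np.clip((gx + gy) * edge_strength, 0.0, 1.0)

    # Darken pixels that sit on strong edges to get the inked-outline look.
    return poster * (1.0 - edges)[..., None]

frame = np.random.rand(480, 640, 3)   # stand-in for a timelapse frame
toon = cartoonify(frame)
```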

Research bid success

Yesterday I got confirmation that the EU Leonardo research funding bid in which I (and by extension Bournemouth University) am a partner, led by CV2 in Denmark with other partners across various European countries, has been successful. The bid was made under the Transfer of Innovation call.

The project is called Game-iT. I will be posting more information about it at a later stage but, essentially, it proposes new methods based on Kolb's learning cycle, ICT and new Web 2.0 tools in order to enable game-based learning. The project begins in October and runs for two years. I'm quite excited about this one and will be posting many more updates once it is underway!

Tuesday, 18 August 2009

The future of gaming graphics according to Crytek co-founder

Some very useful insights into where real-time entertainment graphics are heading were presented in the GDC Europe keynote of Crytek co-founder Cevat Yerli, who discussed the "future of gaming graphics" from the perspective of the German developer and CryEngine maker. Talking trends, Yerli observed that GPUs and CPUs are "on a collision course", as CPUs become more parallel and GPUs move towards more general-purpose computing. He recommended OpenCL as a good base for addressing the issue.

Yerli suggested that Crytek is estimating 2012 to 2013 for the next generation of home console hardware. But thanks to the success of the relatively horsepower-light Wii, "there's a big debate about whether there will be a next generation at all", he admitted. He also suggested that most games will use artistic styles, physics and AI to differentiate themselves, at least until 2012, when the next generation may arrive.


He then focused on the technical innovations that he feels will make a difference in graphics. For example, point-based rendering is potentially faster than triangle-based rendering at certain higher quality levels, and works well with levels of detail. On the other hand, point-based rendering might define a certain super-high-polygon look for games, Yerli said. However: "There's a lot of games today in the Top 10 which don't need that", he conceded, and content creation tools are almost exclusively based around triangles right now.

He also noted ray-tracing as a possible rendering method to move towards, and particularly recommended rasterization and sparse voxel octrees for rendering. Such principles will form "the core" of future technology for Crytek's next engine, Yerli said, and the goal is to "render the entire world" with the voxel data structure.
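For readers who haven't met the data structure: a sparse voxel octree stores only the occupied regions of space, with each node subdividing its cube into eight children on demand. Below is a minimal, hypothetical Python sketch of insertion into such a tree, purely to illustrate the idea; Crytek's actual engine work is of course far more involved.

```python
class SVONode:
    """One cube of space; children are created only where geometry exists (hence 'sparse')."""
    __slots__ = ("children", "occupied")

    def __init__(self):
        self.children = [None] * 8   # octants, allocated lazily
        self.occupied = False

def insert(node, x, y, z, depth, max_depth):
    """Insert a point (coordinates in [0, 1)) by descending to a leaf voxel."""
    node.occupied = True
    if depth == max_depth:
        return
    # Pick the octant the point falls into and rescale coordinates to that child cube.
    ix, iy, iz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
    child_index = ix | (iy << 1) | (iz << 2)
    if node.children[child_index] is None:
        node.children[child_index] = SVONode()
    insert(node.children[child_index],
           (x - 0.5 * ix) * 2, (y - 0.5 * iy) * 2, (z - 0.5 * iz) * 2,
           depth + 1, max_depth)

root = SVONode()
insert(root, 0.1, 0.7, 0.3, 0, max_depth=6)   # voxelise a single sample point
```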

Concluding, Yerli suggested that, after 2013, there are opportunities with new APIs and hardware platforms to "mix and match" between multiple rendering models, with "a Renaissance of graphics programming", and visual fidelity on a par with movies such as Shrek and Ice Age rendered in real time.

Sunday, 16 August 2009

Top 10 comic book urban/city spaces

A very interesting article for anybody into alternative visualization of urban spaces and virtual cities: from Gotham City to Mega-City One, the online Architects' Journal presents a selection of the best-depicted comic book urban locations. It is very well illustrated throughout and makes for great reading.

These include Radiant City (from the comic Mr. X), Metropolis (from Superman), Gotham City (from Batman), New York (from Daredevil) and others. It is extremely thought-provoking to see how comic book illustrators have chosen to depict urban spaces (and, more importantly, why) in each of these cases.

The article can be found at http://www.architectsjournal.co.uk/the-critics/top-10-comic-book-cities/5204772.article

Monday, 10 August 2009

Paris on the iPhone!

Following on from the first iPhone virtual city post below, the excellent Digital Urban blog brought to my attention the Mobile 3D City website...

According to the company itself, after almost twenty man-years of R&D over a four-year period, here it is: the first entry in a collection that will rapidly expand as each new city is added. What is Mobile 3D City? It is a collection of tourism guides developed specifically for mobile devices (currently the iPhone), in a truly photorealistic, interactive 3D format that includes numerous points of interest. Paris is the first example of their work, showcased in the video below.


Mobile 3D City is also an international trademark registered by the Newscape Technology company. Behind it is a consortium that brings together, in addition to Newscape Technology itself, the expertise of Computamaps, an international market leader in high-definition photorealistic 3D mapping; Cityzeum, a dedicated tourism company; and Navidis, which specialises in geo-localised content.

http://www.mobile3dcity.com/

Friday, 7 August 2009

Urban tagging using augmented reality

A new application from a company called Metaio uses augmented reality (AR) to allow users to leave tweets, messages, web pages and 3D models at a real-world location, for other users to view or pick up when they are in the vicinity.

This cool little application opens up a whole number of new development routes, from augmented graffiti, to leaving a virtual message outside someone's apartment if they are out, to tagging locations, restaurants and services with virtual comments that users can view simply by pointing their mobile device at the location. I am personally particularly interested in the 3D model route. Imagine being able to see the interior of selected buildings (and navigate around it as well) while staying on the pavement!
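The core mechanic, showing a tag only when the user is physically close to where it was left, boils down to attaching a latitude/longitude to each piece of content and filtering by distance from the phone's GPS fix. A small, hypothetical Python sketch of that proximity check (not Metaio's actual implementation):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Virtual 'graffiti' left at fixed locations: (latitude, longitude, payload).
tags = [
    (51.5033, -0.1196, "tweet: meet at the Eye at 6"),
    (51.5007, -0.1246, "3D model: building interior"),
]

def tags_in_range(user_lat, user_lon, tags, radius_m=150.0):
    """Return the tags the user should see from their current GPS position."""
    return [payload for lat, lon, payload in tags
            if haversine_m(user_lat, user_lon, lat, lon) <= radius_m]

print(tags_in_range(51.5030, -0.1200, tags))   # only nearby content is shown
```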

The application is currently under development for iPhone, Google Android, Windows Mobile and Symbian S60 platforms. Check the website out at http://www.metaio.com/

Manhattan 3D map on the iPhone, first iPhone 3D virtual city for navigation?

Working on my PhD writeup and I've just come across this, which reinforces the overall point on using 3D virtual cities for mobile navigation...

UpNext NYC is the first interactive 3D map on the iPhone that gives you the ability to explore Manhattan (or any city really). With UpNext you can fly and zoom through the city fluidly, in its full 3D glory, without network hiccups or download times.

Along the way you'll be shown restaurants, nightlife, shops, and all the places that are local favorites or highly rated. Tap a building to see all the businesses inside, or tap a subway station to see all the trains passing through. Search for bars, hair salons, sushi, or any of our other 50+ categories and you'll get all the results in your area.

Now if only the map were in a non-photorealistic style. :)

iTunes Link: http://www.itunes.com/app/upnextnyc

UpNext 3D NYC: http://www.upnext.com/iphone

Thursday, 6 August 2009

Using the power of a GPU for signal processing commands

Researchers in the Georgia Tech Research Institute (GTRI) and the Georgia Tech School of Electrical and Computer Engineering are developing programming tools to enable engineers in the defense industry to utilize the processing power of GPUs without having to learn the complicated programming language required to use them directly.

Mark Richards, a principal research engineer and adjunct professor in the School of Electrical and Computer Engineering, is collaborating with GTRI's Dan Campbell and graduate student Andrew Kerr to rewrite common signal processing commands to run on a GPU. This work is supported by the U.S. Defense Advanced Research Projects Agency and the U.S. Air Force Research Laboratory.

The researchers are currently writing the functions in Nvidia's CUDA language, but the underlying principles can be applied to GPUs developed by other companies, according to Campbell. With GPU VSIPL, engineers can use high-level functions in their C programs to perform linear algebra and signal processing operations, then recompile with GPU VSIPL to take advantage of the GPU's speed. Studies have shown that VSIPL functions run between 20 and 350 times faster on a GPU than on a central processing unit, depending on the function and the size of the data set.
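To give a flavour of the kind of routine being ported, VSIPL-style libraries expose operations such as FIR filtering behind a single high-level call. The sketch below is a plain NumPy stand-in for one such operation; it is not the GPU VSIPL API itself, just an indication of the sort of function that benefits from being moved onto the GPU.

```python
import numpy as np

def fir_filter(signal, taps):
    """Finite impulse response filter: the bread-and-butter kind of kernel
    a VSIPL-style library would expose as a single high-level call."""
    return np.convolve(signal, taps, mode="same")

# Toy example: smooth a noisy 1 kHz tone sampled at 48 kHz with a moving-average filter.
fs = 48000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.random.randn(fs)
taps = np.ones(16) / 16.0
smoothed = fir_filter(noisy, taps)
```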

The research team is also assessing the advantages of GPUs by running a library of benchmarks for quantitatively comparing high-performance, embedded computing systems. The benchmarks address important operations across a broad range of U.S. Department of Defense signal and image processing applications.

In the future, the researchers plan to continue expanding GPU VSIPL, develop additional defense-related GPU function libraries and design programming tools to utilize other efficient processors, such as the Cell Broadband Engine processor at the heart of the PlayStation 3 video game console.

Wednesday, 5 August 2009

Building Rome In A Day urban modelling research project

A great research project for anybody into novel methods for urban modelling: the Graphics and Imaging Laboratory of the University of Washington's Department of Computer Science and Engineering has considered the problem of reconstructing entire cities from images harvested from the web. The aim is to build a parallel distributed system that downloads all the images associated with a city from Flickr.com.

After downloading, it matches these images to find common points and uses this information to compute the three-dimensional structure of the city and the poses of the cameras that captured the images. The movie above details their sample work using 58,000 images of Dubrovnik sourced from Flickr.
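Roughly speaking, the matching stage extracts local feature descriptors from every photograph and finds nearest-neighbour matches between image pairs to identify points seen in common; those correspondences then feed the structure-from-motion step that recovers geometry and camera poses. A toy, hypothetical Python sketch of the nearest-neighbour matching idea (the real system uses SIFT-style descriptors and heavy distributed optimisation):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature descriptors between two images with a Lowe-style ratio test.

    desc_a : N x D array of descriptors from image A
    desc_b : M x D array of descriptors from image B
    Returns a list of (index_in_a, index_in_b) putative correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every descriptor in B
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # accept only unambiguous matches
            matches.append((i, best))
    return matches

# Toy data: 100 random 128-dimensional descriptors per 'image'.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(100, 128))
desc_b = rng.normal(size=(100, 128))
print(len(match_descriptors(desc_a, desc_b)))
```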

The results are stunning (and I love the slight NPR tinge!). Check the project out (called Building Rome In A Day) at http://grail.cs.washington.edu/rome/

Apple to make a foray into the game console market?

The big news recently is that Apple is reportedly prepping a new touch-screen tablet computer for launch later this year. Early reports suggest that the system will feature a ten-inch touch-screen and be similar in appearance and feel to a large iPod Touch.

Expected to launch in September for around $800 (£500), the device will reportedly be positioned as a home media hub capable of streaming content and services, and it will double as a game console. Apple has so far refused to comment on the rumours.

It should be remembered that Apple has enjoyed notable success in the games market in recent times, with games dominating the App Store download charts and an ever-increasing number of established publishers and developers pledging support for its fast-growing iPhone and iPod Touch platforms, so this really is no big surprise...

Quite how successful this move would be remains to be seen, as Sony, Microsoft and Nintendo are already in fierce competition for a share of the market.