Final Year Projects 96/97

The projects I've put in this year are as follows; they are slightly updated with respect to the published handbook of projects.


Introduction to Projects in VR Systems Architecture

Virtual Reality is a little dull. Demonstrations are simplistic and unimpressive, the headsets and navigation methods are crude, and the wildly exaggerated claims of what can be achieved today lead first to excitement, but then to disappointment and disillusion when it can't be delivered.

Behind the hype, though, serious work and interesting things are going on. VR does offer a quantum step forward in the way we interact with computers. The wonder elements aren't really due to new technology, though - even at its best it's just fast 3D graphics plus movement tracking. The difference and novelty is the paradigm shift at the human end: ways of using human perceptual skills that we haven't been able to engage before in computer interfaces.

You couldn't learn to juggle or ride a bicycle from a screen/keyboard interface, nor can you really get a good feeling for what the inside of a proposed building will be like - you can look at 3D models on a screen, but that's little better than seeing photos, and it's certainly whole orders of magnitude different from being there. VR peripherals of sufficient quality to work well aren't here yet, but they will be, and the potential is clear. In the meantime there are a lot of problems to be addressed.

One thing that will happen in the medium term is that we'll move away from hand-coded one-off demonstrations towards full-scale VR systems. These will have many applications running together in the world, and multiple worlds that you can move between, each supporting different kinds of task. The whole thing will need to be dynamic, with applications, worlds and users coming into being and being destroyed, in much the same way that applications and users come and go on a workstation. That means we need something like an operating system to provide the necessary facilities to the application/world builders.

Deva

In the AIG this is an area we have been looking at for several years now, and our current system to address it is called Deva. It is the work of Steve Pettifer, who is doing his Ph.D. in this area.

There are three areas that have been identified where final year projects could sensibly contribute to this effort.

Deva Touch-stone - controlling worlds

You should first read the Introduction to Projects in VR Architecture above.

Deva allows worlds, complete with laws (such as gravitation and the associated mass attribute) to be dynamically created. Worlds with similar laws can be derived from them, resulting in a class hierarchy of worlds. These worlds are populated with objects that may have behaviours, and be associated with applications. Movement of objects between worlds with different laws is supported sensibly. In this way applications can find a suitable home, or create one, quite straightforwardly.
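
To give a flavour of what that hierarchy might look like, here is a minimal sketch in C++; the names and structure below are purely illustrative and are not Deva's actual interfaces. A world carries the laws added to it locally and inherits the rest from the world it was derived from.

    // Illustrative sketch only - these are not Deva's real classes.
    #include <string>
    #include <vector>

    struct Object {                      // an inhabitant of a world
        std::string name;
        double pos[3];
        double vel[3];
    };

    struct Law {                         // e.g. gravitation
        virtual void apply(Object& o, double dt) = 0;
        virtual ~Law() {}
    };

    struct World {                       // worlds form a class-like hierarchy
        World* parent;                   // laws are inherited from the parent world
        std::vector<Law*> laws;          // plus any laws added locally
        std::vector<Object*> objects;

        void step(double dt) {
            for (World* w = this; w; w = w->parent)   // walk up the hierarchy
                for (Law* l : w->laws)
                    for (Object* o : objects)
                        l->apply(*o, dt);             // each applicable law acts on each object
        }
    };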

It's all very dynamic, as we don't want to have to stop the whole system and recompile it in order to add a new object/user/world/law/application etc. (like Smalltalk, but we can't afford the kind of performance penalty that Smalltalk's implementation imposes). That being so, the universe can become a very confusing place: where did the calculator get its idea of gravity from? Which laws operate in the current world, and where did they come from? What objects exist here today? Which world is most appropriate for my CAD application? Can I take my micrometer with me to relativistic world? Where is relativistic world? How do I get there?

The project

The Deva Virtual World Manager is a tool to help us keep track of it all. For this project you'd need to get to grips with the way that Deva organises the multiverse and the objects that populate it. Then it is a case of inventing interesting ways of looking at these structures. Several approaches are possible: command-line, Smalltalk-style browsers, or full-blown VR.

We'd start off with a class browser. An ASCII interface would be a beginning, and there's a public domain browser for C++ which may be of use. A helpful feature of the project is that it doesn't depend heavily on having to book time on particular machines, so there shouldn't be any queuing up for resources.
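
As a rough illustration of the ASCII end of the spectrum, an indented class browser is little more than a recursive walk over the hierarchy. The node structure here is invented for the sketch; the real Deva structures will differ.

    // Hypothetical sketch of an ASCII class browser.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;                // e.g. a world or law class
        std::vector<Node*> children;     // classes derived from it
    };

    void browse(const Node* n, int depth = 0) {
        std::printf("%*s%s\n", depth * 2, "", n->name.c_str());
        for (const Node* c : n->children)
            browse(c, depth + 1);        // indent derived classes beneath their parent
    }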

To take on this project you will need to be good at picking up new things, working with complex ideas and producing a pretty solid implementation. There's plenty of scope for producing something clever here.

References

Additional details

Grade:C (challenging)
Single/Joint hons:single
Number of Students:1
Course prerequisites: none
Equipment:Suns and AIG machines

No special equipment is required, though some access to the AIG machines may prove useful.


Deva SpaceMan - managing space

You should first read the Introduction to Projects in VR Architecture above.

Deva runs across multiple processors and has some clever ways of handling the whereabouts of things so that it is transparent to the applications programmer/world architect, yet incurs very little performance overhead.

Though we can ship objects around the processors transparently, a current debate is how best to handle the distribution of "virtual space" between the machines; i.e. who knows what is where in the virtual worlds. This is especially important when we want to be informed when something hits something else. A naive solution could easily destroy performance, so it's important to do it efficiently with minimum to-ing and fro-ing of messages between machines.
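
One obvious family of approaches, sketched below purely for illustration, is to carve the virtual space into cells and give each cell an owning processor, so that most queries stay on one machine. The grid, cell size and hashing here are my assumptions, not Deva's actual scheme.

    // Sketch of one possible scheme: a uniform grid of cells, each owned by a processor.
    #include <cmath>

    const double CELL  = 10.0;           // cell size in world units (arbitrary)
    const int    NPROC = 8;              // number of processors (arbitrary)

    struct Cell { long x, y, z; };

    Cell cell_of(double px, double py, double pz) {
        return { (long)std::floor(px / CELL),
                 (long)std::floor(py / CELL),
                 (long)std::floor(pz / CELL) };
    }

    int owner_of(const Cell& c) {        // which processor manages this region of space
        long h = c.x * 73856093L ^ c.y * 19349663L ^ c.z * 83492791L;
        return (int)(((h % NPROC) + NPROC) % NPROC);
    }

    // An object only needs to notify the owner of its current cell (and, near a
    // boundary, the neighbouring cells), so most collision queries stay local.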

There are ideas for how this would be best achieved, and what's needed is for someone to implement the distributed collision detection so we can find out just how efficient it can be made in practice.

Some of the issues in collision detection involve determining the mathematical intersections of various 3D shapes, so a good grasp of the relevant maths would be helpful.
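
As a taste of the geometry involved, the cheapest first-pass test is bounding-sphere overlap; real shapes need more elaborate intersection tests, but the flavour of the maths is the same.

    // Bounding-sphere overlap: the cheapest first-pass collision test.
    struct Sphere { double x, y, z, r; };

    bool intersects(const Sphere& a, const Sphere& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        double rsum = a.r + b.r;
        // compare squared distances to avoid the square root
        return dx * dx + dy * dy + dz * dz <= rsum * rsum;
    }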

The algorithms and modeling of distribution can be developed quite independently of Deva, which makes it easier for a project student to work on them. By the end of the project though they would have to be integrated with the working Deva system.

References

Additional details

Grade:C (challenging)
Single/Joint hons:single
Number of Students:1
Course Prerequisites:none
Equipment:Suns and AIG machines


Deva Oracle - managing the renderer

You should first read the Introduction to Projects in VR Architecture above.

At the end of the day, there will be people experiencing the virtual worlds that Deva models. That means something is going to have to render the scenes. We will call that the "renderer". Within the AIG there is a lot of graphical expertise, and a major effort is going into constructing a library of graphical and VR-related tools known as MAVERIK. Deva will use MAVERIK's rendering capability to present the virtual world to its inhabitants. Bearing in mind that Deva may be running on a parallel processor, and that the renderer may be on a separate graphics workstation, how does Deva tell MAVERIK what to draw? We certainly don't want to send lists of pixels or polygons - that would be a major bottleneck between Deva's engines and the rendering machine. Deva should clearly send quite high-level descriptions and changes down to the renderer: "ball-3, direction_vector xyz, velocity v", "lettuce-5 follows spline 37 at velocity v morphing to cabbage-7", "small-furry-animal-19 explodes (again)".

In that example we have allowed ourselves to specify not just the scene, but simple linear (or Newtonian) motion too. The renderer doesn't know why things are as they are; it just wants to know where they are and what they look like. Deva may be modeling some complex dynamical behaviour of a system, but it can pass on a simple approximation which will give a good idea of where the objects will be for the immediate future, so the renderer can press on using simple maths for current locations, and doesn't have to ask where things have moved to every frame. The technique is known as "dead reckoning". All this helps cut down the amount of communication needed between Deva and the renderer.
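
A minimal sketch of the dead-reckoning idea follows; the structure and names are illustrative, not part of Deva or MAVERIK. The renderer keeps the last reported state and extrapolates it each frame, and is only corrected when a new message arrives.

    // Dead reckoning: extrapolate position from the last update Deva sent.
    struct TrackedObject {
        double pos[3];                   // position at the time of the last update
        double vel[3];                   // velocity reported in that update
        double t0;                       // time the update was received
    };

    void current_position(const TrackedObject& o, double now, double out[3]) {
        double dt = now - o.t0;
        for (int i = 0; i < 3; ++i)
            out[i] = o.pos[i] + o.vel[i] * dt;   // simple linear extrapolation
    }

    // When Deva does send a new message ("ball-3, direction_vector xyz, velocity v"),
    // the renderer just overwrites pos, vel and t0; between messages no traffic is needed.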

However, MAVERIK exists at a somewhat lower level than that, and understands things such as "draw this object, draw that object". The objects it knows how to draw are simple solids or anything an application can tell it how to render.

Clearly there is a gap. Something needs to remain cognizant of messages from Deva whilst continually updating MAVERIK's view of the world. The clever bit is that it has to take care of the simple mechanics, caching of commonly used object representations, and possibly such things as following animation tracks and morphs. The goal is to cut down bandwidth and give a clean, high-level interface between Deva and MAVERIK.
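
Very roughly, the gap-filling component could look something like the sketch below. The names, the cache layout and the notion of a MAVERIK "handle" are my assumptions for illustration only; it simply keeps each object's renderable form cached and refreshes its motion state as Deva messages arrive.

    // Sketch of the "oracle" layer between Deva messages and MAVERIK drawing.
    #include <map>
    #include <string>

    struct Motion { double pos[3], vel[3], t0; };    // last reported state (dead reckoning)

    struct CachedObject {
        int    maverik_handle;           // hypothetical handle identifying the object to MAVERIK
        Motion motion;
    };

    std::map<std::string, CachedObject> cache;       // keyed by Deva object name, e.g. "ball-3"

    void frame(double now) {
        for (auto& entry : cache) {
            Motion& m = entry.second.motion;
            double dt = now - m.t0;
            double x = m.pos[0] + m.vel[0] * dt;     // extrapolate, as in the sketch above
            double y = m.pos[1] + m.vel[1] * dt;
            double z = m.pos[2] + m.vel[2] * dt;
            // ...ask MAVERIK to draw entry.second.maverik_handle at (x, y, z)...
            (void)x; (void)y; (void)z;
        }
    }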

So there is quite a lot of scope, new things to be learnt, and you could help define what an appropriate structure for the Deva graphics messages should be.

To take on this project, you will need to have an understanding of computer graphics, be able to innovate and have good coding skills.

References

Additional details

Grade:C (challenging)
Single/Joint hons:single
Number of Students:1
Course prerequisites: You should be doing the graphics courses.
Equipment:Suns and AIG machines

Much of the clever work is actually independent of having to render anything. However, MAVERIK runs on the AIG machines, so you would need to book some time on those.


Graphics Related Projects

These projects are related to work in the AIG, and are best described as "graphics related".

Poly-filler

When modeling a scene to be rendered graphically, the objects are usually described in terms of polygons. Looking at the model closely often reveals many gaps that have crept in where the polygons or objects do not quite meet where they should. Sometimes this is due to errors in measurement of real objects being modeled, where for example the roof may not quite meet the walls. Even where co-ordinates for an artificial scene are calculated directly, errors are common.

In the AIG both our own models and those received from outside sources often have gaps which have to be painstakingly corrected. It's not just that they are irritating: when computing scene illumination using radiosity, light leaks in or out through these gaps, so they become highly evident and must be fixed. Working out exactly which points are involved can be quite time consuming.

Automatic correction of such errors is difficult, as it's not possible to distinguish such gaps from intentional fissures, or from objects that just happen to be close together. However, it should be possible to produce an algorithm to identify good candidates. These areas could be rendered as polygons and highlighted in the model. We would then need a way of wandering around the model, selecting these gaps and telling the "poly-filler" to close or discard them. The poly-filler would then write out a corrected version of the model.
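
A simple-minded way of proposing candidates is to flag pairs of vertices from different polygons that lie within a small tolerance of one another without actually coinciding. The sketch below is quadratic in the number of vertices, so a real tool would use a spatial index; the tolerance and the data layout are arbitrary choices for illustration.

    // Naive candidate-gap finder: flag vertex pairs that nearly, but not exactly, coincide.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vertex { double x, y, z; int poly; };     // which polygon the vertex belongs to

    void find_candidates(const std::vector<Vertex>& vs, double tol) {
        for (size_t i = 0; i < vs.size(); ++i)
            for (size_t j = i + 1; j < vs.size(); ++j) {
                if (vs[i].poly == vs[j].poly) continue;          // same polygon: not a gap
                double dx = vs[i].x - vs[j].x;
                double dy = vs[i].y - vs[j].y;
                double dz = vs[i].z - vs[j].z;
                double d = std::sqrt(dx * dx + dy * dy + dz * dz);
                if (d > 0.0 && d < tol)                          // close but not coincident
                    std::printf("candidate gap: vertices %zu and %zu (%.4f apart)\n", i, j, d);
            }
    }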

Additional details

Grade:S/C (standard/challenging)
Single/Joint hons:single
Number of Students:1
Course prerequisites: You should be doing the graphics courses.
Equipment:Suns and AIG machines
About half the project could be done on any workstation. For the remainder, which would require writing some of the code for interacting with the model, you would need to book time on the AIG machines.


Digitising Tablet

A digitiser is like a mouse but senses the absolute, rather than relative, position of the puck/pen. This means you can put a plan or drawing on the pad and trace it into the computer. The digitiser hardware streams out the current co-ordinates of the pen, and the button status.

We do have a digitiser but no software for it, and we would like to be able to digitise plans such as buildings to help build models. What is required therefore is software to read in the stream of coordinates and allow the picture to be edited on the screen. Functions such as snapping to a grid, constraining lines to be horizontal or vertical, and labeling objects would be desirable. Ultimately it would be nice to be able to input 3D objects in this way by using both plan and elevation information.
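
As a flavour of the editing functions involved, snapping a digitised point to a grid and constraining a line to be horizontal or vertical are only a few lines each; the grid spacing here is just an arbitrary parameter.

    // Snap a digitised coordinate to the nearest grid point.
    #include <cmath>

    double snap(double value, double grid) {
        return std::round(value / grid) * grid;
    }

    // Constrain the end of a line to be horizontal or vertical with respect to its start.
    void constrain_axis(double x0, double y0, double& x1, double& y1) {
        if (std::fabs(x1 - x0) > std::fabs(y1 - y0))
            y1 = y0;                     // mostly horizontal: force horizontal
        else
            x1 = x0;                     // mostly vertical: force vertical
    }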

The output format would be the Manchester Scene Description Language (MSDL), which is a straightforward textual description of simple graphic objects.

Code for the project could be written in C or C++, and X could be used as the drawing library. However, Perl or Tcl together with the Tk graphics library could well be a more attractive alternative.

Additional details

Grade:S/F (standard/flexible)
Single/Joint hons:single
Number of Students:1
Course prerequisites: You would find the graphics courses useful.
Equipment:Suns and AIG machines

Any Unix machine could be used for development, which could even be done using a mouse. The live system would need to be a machine into which the digitiser is plugged.


Laboratory related projects

This project will help us with the running of the undergraduate laboratories.

The Coconut Shy

A coconut shy is a fairground stall where you get a number of balls and try to knock over coconuts to score points or win prizes.

ARCADE, our laboratory management system, should have a Coconut Shy too. It would be a service to which students could submit their programs to have them run against a battery of tests. Some feedback and a score would be returned to them, saying what demonstration mark they would have scored and identifying common problems. When they finally decide to submit their work for real, it would be run against a different battery of tests, and that score and date of submission would count.

If this could be done there are a number of significant advantages for both staff and students. Students get to try their code against the marking scheme several times, so they shouldn't get penalised for trivial omissions or syntax they hadn't realised was required, and they could get useful feedback on commonly diagnosed problems without having to wait around for a demonstrator. Marking would be more objectively consistent too, though for "style" marks we would still rely on staff and demonstrators. Demonstrators' time is freed up as well, since they don't need to spend all their time marking demonstrations and checking whether a program does exactly what it's supposed to do against some complex test listing; that means there will be more effort available for helping people.

Clearly there are a number of issues to explore. How and in what ways can we generate marks for the performance of a program - especially if it doesn't perform perfectly? How do we diagnose common problems? How do students submit their work and get feedback? What are the security implications?
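
Purely to illustrate the shape of the thing, and not as a proposed design, a harness could run the submission over a set of input and expected-output files and count the matches. The file naming scheme below is invented, and a real version would also need resource limits and sandboxing to address the security questions above.

    // Minimal sketch of a test harness that scores a submission against canned tests.
    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main() {
        const int ntests = 10;
        int passed = 0;
        for (int i = 1; i <= ntests; ++i) {
            std::string run = "./submission < test" + std::to_string(i) + ".in"
                              " > got" + std::to_string(i) + ".out";
            if (std::system(run.c_str()) != 0) continue;         // crash or non-zero exit
            std::string cmp = "cmp -s got" + std::to_string(i) + ".out"
                              " expected" + std::to_string(i) + ".out";
            if (std::system(cmp.c_str()) == 0) ++passed;         // output matched exactly
        }
        std::printf("score: %d/%d\n", passed, ntests);
        return 0;
    }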

Equally clearly it all looks quite possible to do, and indeed the University of Warwick do just that. For this project we would take the trial case of the CS2051 process scheduling exercise and try to make our Coconut Shy for it.

The coding could be done in any language. C or C++ are obvious choices, but Perl is probably the most attractive option.

Additional details

Grade:S/F (standard/flexible)
Single/Joint hons:single
Number of Students:1
Course prerequisites: none.
Equipment:Suns


Artificial Life

Artificial life is concerned with the study of life-like behaviours in computer simulations. The intention is to learn more about the processes of life by causing analogous behaviour to arise within the controlled environment of the computer - an ideal vehicle to put theories to the test. One major branch of Alife study is to create environments in which evolutionary forces can act and where life-like behaviour can arise and evolve within the simulation.

A number of approaches to this are possible. Some attempt to simulate creatures and their metabolism; others use the idea of a computer instruction set, with executing processes representing life-forms. Parasitic and symbiotic forms have been reported as arising in such simulations. Some have focused their efforts on "primordial soup" experiments which attempt to synthesise life-like behaviour from simple components and random interactions. Others have focused on the mysterious Cambrian explosion, before which only simple single-celled organisms existed, and immediately after which an enormously rich and varied range of life-forms arose, the majority of which died out quite quickly.
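
To give a feel for the simplest end of this spectrum, an evolutionary loop over bit-string genomes needs only mutation, evaluation and selection. Everything below is illustrative; the fitness function is a placeholder, and an interesting Alife environment would of course be much richer than this.

    // Minimal evolutionary loop: mutate, evaluate, and keep the fitter individuals.
    #include <cstdlib>
    #include <vector>

    typedef std::vector<int> Genome;     // bit-string genome

    int fitness(const Genome& g) {       // placeholder: count the 1s
        int f = 0;
        for (size_t i = 0; i < g.size(); ++i) f += g[i];
        return f;
    }

    Genome mutate(Genome g, double rate) {
        for (size_t i = 0; i < g.size(); ++i)
            if ((double)std::rand() / RAND_MAX < rate)
                g[i] = 1 - g[i];         // flip a bit
        return g;
    }

    void evolve(std::vector<Genome>& pop, int generations) {
        for (int gen = 0; gen < generations; ++gen)
            for (size_t i = 0; i < pop.size(); ++i) {
                Genome child = mutate(pop[i], 0.01);
                if (fitness(child) >= fitness(pop[i]))
                    pop[i] = child;      // offspring replaces parent if it is no worse
            }
    }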

For this project the student would need to select an area of interest within the Alife field. A number of previous projects have been undertaken and may provide useful ideas for starting points. We are also assisted by Dr Malcolm Shute, previously of this department, who takes an active interest in the area and has specified a number of Alife projects that could be undertaken. The main prerequisite, however, is enthusiasm for the study of Alife, the imagination to create an interesting environment, and the persistence and skill to implement it such that it will do something useful. You should be warned, though, that although it seems easy to have good ideas for what to do, actually making anything interesting happen is very hard.

Additional details

Grade:F (flexible)
Single/Joint hons:joint or single
Number of Students:1
Course prerequisites: none, though AI courses may be useful.
Equipment:Suns

Though this project is not formally associated with activities in the AI stream, you may find their modules useful.

