As part of our collaborative research work with BAE Systems, we wish to recruit a research student for an industrial CASE PhD study on Cooperative High Resolution Mosaicing for Autonomous Ground Vehicles.
Images from a camera onboard unmanned (or remotely piloted) vehicles can be very helpful in perceiving information about the environment of operation, but are often plagued by limited spatial and temporal fields of view, disorienting rotations, and noise and distortion. Together, these problems make it very difficult for human viewers to identify objects of interest and to make the right decisions and take the right actions during vehicle operation. Many applications of autonomous vehicles, such as land surveillance, military operations, disaster monitoring and digital city applications, require imagery that offers 3D information, a wide Field of View (FOV) and different viewing directions. Moreover, objects seen in the acquired videos tend to move quickly through the viewing stream, so the time a user has to evaluate the scene, or to look back at objects that may have been of interest, can be dramatically shortened, often making it very challenging for vehicle operators to identify objects of interest actually captured in the video. For these reasons, the construction and use of mosaic images become necessary for vehicle mapping applications.
Whilst a single autonomous vehicle with its set of vision sensors can achieve simple mission objectives, a cooperating set of vehicles can provide better performance by incorporating information additional and different to that available to a single platform. This is no surprise: it is, after all, the basic premise of data fusion.
A networked set of vision sensors will enable new capability by allowing multiple mobile vision sensor platforms to act as an adaptive, distributed mapping sensor, providing visual information that increases the likelihood of mission success. The challenge for such a system of cooperative vision sensors is to behave in a predictive, global manner, providing complementary perception information so that the environment can be explored and objects identified and tracked.
In this work, the objective is to build sufficiently high-resolution 2D/3D video mosaics from the imaging streams acquired by vision sensors mounted on each of the cooperating vehicles. Cooperative mosaicing will be considered at the level of each vehicle, to obtain its panoramic view, and, most importantly, between the networked vision sensors of the vehicles linked to the command station, to provide a full map of the operating environment. The challenge of this study is to propose methodologies suited to delivering this demanding real-time, high-resolution 2D/3D video mosaicing from moving platforms, where both rotational and translational camera motions must be considered.
In collaboration with our industrial partner BAE Systems, we propose to investigate techniques for robust natural feature extraction, such as our adaptive SIFT or SURF descriptors, to characterise and register images based on robust-norm estimation of the homography. Such mathematical techniques will help discard outliers affecting the individual imaging data. The use of these norms (L1, L-infinity) will also help to optimise the resolution of the 2D mosaic, by accounting robustly for all the geometric and photometric transformations that the cooperative video streams undergo, and to build the 3D video mosaic from the multiple views obtained from the multiple vehicle platforms, to be exploited by the user at the command station.
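As an illustrative sketch only (not part of the project specification), the registration step described above can be approximated by fitting a homography to matched feature points inside a RANSAC loop that discards outlier matches. The function names below are hypothetical, and the plain least-squares (DLT) fit stands in for the robust L1/L-infinity estimation the project would actually investigate:

```python
import numpy as np

def dlt_homography(src, dst):
    """Fit a 3x3 homography mapping src -> dst via the Direct Linear
    Transform: stack two linear constraints per point pair and take the
    null vector from the SVD. src, dst are (N, 2) arrays with N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale ambiguity

def project(H, pts):
    """Apply homography H to (N, 2) points and dehomogenise."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def ransac_homography(src, dst, n_iter=500, thresh=3.0, rng=None):
    """Robustly estimate a homography from noisy matches: repeatedly fit
    on a random minimal sample of 4 pairs, keep the hypothesis with the
    most inliers (reprojection error below thresh, in pixels), then
    refit on all inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        try:
            H = dlt_homography(src[idx], dst[idx])
        except np.linalg.LinAlgError:
            continue                      # degenerate sample
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return dlt_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Once the homography between two overlapping frames is known, one frame can be warped into the other's coordinate system and blended to extend the mosaic; in the robust-norm variants discussed above, the least-squares refit would be replaced by an L1 or L-infinity minimisation.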
For this study, it is planned to have a laboratory mobile robotics setup with onboard vision sensors at the university, in order to validate the developed algorithms and demonstrate a lab prototype of the system we aim to achieve. It is also planned to use the facilities of the company (BAE Systems) to gather more realistic data from the ground vehicles available there, and to validate the proposed work in more sophisticated real-world scenarios.
For this research studentship, we are seeking a talented graduate holding (or expecting to obtain) at least an upper second class honours degree (first class honours preferred), an MSc, or an equivalent qualification in electrical engineering or computer science. A good mathematical background and experience in signal and image processing, data fusion and real-time systems would be most desirable.
We are looking for a UK national (or an EU national resident in the UK for the last three years) only. The studentship is for three years and provides payment of tuition fees at the UK/EU rate, plus a contribution to living expenses of £15,000 to £17,000 per annum (depending on qualifications, and normally tax free).
Start date: 2010 (October 2010 at the latest)
To apply, send a CV and a covering letter by email to Dr Aouf: n.aouf[at]cranfield.ac.uk
- PhD Studentship in Dept of Electronic and Electrical Engineering, University of Strathclyde, Glasgow, Scotland
- PhD Studentships in Intelligent Transport Systems (ITS), Oxford Brookes University, UK
- MSc Research Studentship on Fatigue Performance of Laser Peened Aluminium Alloys, Cranfield University, UK
- MBA Scholarships (High Tech), School of Management, Cranfield University, UK
- PhD Studentships in Film and Media, Bradford Media School, University of Bradford, UK
- MSc by Research Studentship in Smart Grids Greener Energy, Cranfield University, UK
- MSc by Research in Documentation Process of Medical Devices, Cranfield University, UK
- MSc by Research Scholarship in Aerospace Best Practice Benchmarking, Cranfield University, UK
- MSc by Research Studentship in Developing e-marketing Capability, Cranfield University, UK
- PhD Studentship, Alzheimers Society, University of Edinburgh, UK
Disclaimer: Every effort has been made to ensure the above information is current and correct. However, applicants should contact the appropriate administering body before making an application, as details do change frequently.