Inverse problems are central to pattern analysis and machine intelligence. Data is assumed to be generated by an "underlying process", linear or non-linear, and the goal is to "understand" that process. By understanding here I mean gaining more control over the predictability of the data. There are two main top-level categories of processes that affect the data we observe: (1) the true/hidden process and (2) the data acquisition process. The actual goal is to understand (1); the more explicitly we account for (2), the better equipped we are for the real inverse problem of understanding (1).
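A minimal sketch of this decomposition, using a made-up example (the linear model, the noise level, and all parameter names are my illustrative assumptions, not anything from a real application): a hidden process generates clean data, an acquisition process corrupts it, and we try to invert back to (1) by modeling (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) The true/hidden process: a simple linear law y = a*x + b.
#     (Parameters chosen arbitrarily for illustration.)
a_true, b_true = 2.0, -1.0
x = np.linspace(0.0, 1.0, 200)
y_clean = a_true * x + b_true

# (2) The data acquisition process: here, additive Gaussian
#     measurement noise on top of the clean signal.
y_observed = y_clean + rng.normal(scale=0.1, size=x.shape)

# The inverse problem: recover (1) from the observed data.
# Because we explicitly model (2) as zero-mean noise, a
# least-squares fit is the right inversion and the noise
# averages out.
A = np.column_stack([x, np.ones_like(x)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y_observed, rcond=None)

print(a_hat, b_hat)  # estimates close to the true (a, b)
```

The point of the toy: if we had ignored (2), or modeled it wrongly (say, the noise were actually multiplicative), the recovered parameters would be biased, which is exactly why accounting for the acquisition process matters for the real goal of understanding (1).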
I have worked on contour grouping and robot mapping problems, where the goal was to use 2D spatial data and range data, respectively, for recognition and navigation applications. The mainstream in computer vision and robotics pretty much focuses on giving computers very specific human abilities like recognition and navigation, without worrying about the true underlying process that generates those abilities. It's like relaxing the problem of predictability to constrained replicability using the observed data, without going all the way back to the process that generated the original data. I should mention that some people are working in that direction.
Lately I have been working on brain image data. Besides using 3D spatial data, there is quite an effort to reach closer to the original process, which still has a lot of fertility in terms of academic careers. Since brain imaging has medical implications, the conclusions/applications tend to be conservative, and hence the goal becomes getting as close as possible to the real process before we generate new applications from observed data. To put it in different words, it's a very conservative data mining AI application. Yesterday I met Daniel Rowe, who was advertising his grand unified theory (GUT) of fMRI data processing, which accounts for (2) as much as possible in a unified way. This is very useful, as it allows us to get closer to the real (1).
Life is a very interesting process too, one that generates lots of data in Nature. Understanding life is a very hard inverse problem. We need as much data as possible to be able to confidently understand non-trivial facts about life. Hence the basic imperative is to sustain life as long as we can, and for that we need to make it valuable and interesting without influencing independent will too much, which can reduce the utility of the data. Many generations have tried to understand the process using contemporary analytical tools. In some sense its hardness is what actually makes it interesting, as I kind of discussed before.
I thought about writing this up about 6 months ago. Finally, almost as a totally random event, I just decided to write it tonight! It's hard to completely explain the underlying process of my thoughts :) A key problem in data analysis is scale. I would like to post about it sometime, but for a nice discussion of this problem in the context of computer vision, look at this paper by Song-Chun Zhu, whose work I really admire.