Andrew F
- Research Program Mentor
PhD candidate at University of North Carolina - Chapel Hill (UNC Chapel Hill)
Expertise
Multimedia systems, back-end engineering, data compression, computer vision
Bio
My favorite software problems are focused on making systems faster, more practical, and more useful. Many researchers working with emerging video and computer vision technologies pay little attention to how large the data is. In order for these technologies to take hold in the real world, however, we have to work to make the data smaller, more efficient, and more easily communicated to other systems. My work in multimedia compression is all about figuring out how to do this. Some of my recent work has been on developing large-scale computer vision applications for the NSA/DoD, and computer vision for Amazon Web Services data centers. I believe that a person learns by doing, and I love helping students work on big projects and learn a lot along the way! Outside of research, I enjoy landscape photography and astrophotography. I love going out to camp, hike, and find cool slices of nature, and taking pictures helps me remember those times. In fact, my love for photography has helped fuel my interest in multimedia research.
Project ideas
Distributed Image Sequence Compression
Suppose you take several images in quick succession, or record many images as a timelapse sequence. A lot of the data between those images will be similar, yet the photos are stored as individual, standalone files. Why? Doesn't this waste a lot of storage space? In this project, we'll examine what an image file is from the ground up. We'll see how video compression limits file sizes for subsequent frames but doesn't easily allow individual frames to be extracted and viewed. We'll develop a lossless compression technique for image sequences, where each file stores only the difference between that image and the previous one. Finally, we'll write a program to open and display images in the sequence, automatically decoding them. You will write real software that can be published on GitHub and shared with the world!
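To make this concrete, here is a minimal sketch of the frame-differencing idea, assuming 8-bit grayscale frames loaded as NumPy arrays (e.g., via Pillow); the function names are illustrative, not part of any existing tool:

import numpy as np

def encode_sequence(frames):
    """Keep the first frame whole, then store only per-pixel differences."""
    encoded = [frames[0].astype(np.int16)]  # key frame, kept as-is
    for prev, curr in zip(frames, frames[1:]):
        # int16 holds negative differences (range -255..255) without overflow
        encoded.append(curr.astype(np.int16) - prev.astype(np.int16))
    return encoded

def decode_frame(encoded, index):
    """Rebuild frame `index` by adding the deltas back onto the key frame."""
    frame = encoded[0].copy()
    for delta in encoded[1:index + 1]:
        frame += delta
    return frame.astype(np.uint8)

Because consecutive frames are nearly identical, the delta arrays are mostly zeros and shrink dramatically when run through a general-purpose lossless coder such as zlib; that is the storage saving the project aims to exploit.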
Making software fast (and safe)
The Rust programming language is quickly taking the world by storm. It is one of the fastest languages in terms of execution time (80x faster than Python in many cases), yet also one of the easiest languages to write in. In this project, I will help you learn to program in Rust and to port software written in other languages, such as Python, Java, and C/C++. We will use benchmarking tools to compare the speed of the Rust code against equivalent code in other languages, and write a paper about the different languages' strengths and weaknesses.
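As a taste of what the benchmarking side looks like, here is a minimal Python baseline timed with the built-in timeit module; the workload is just an illustrative hot loop, and a Rust port would be timed the same way (for example with the criterion crate) so the two numbers can be compared:

import timeit

def sum_of_squares(n):
    """A simple CPU-bound loop, the kind of code a Rust port speeds up most."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Average over repeated runs to smooth out measurement noise.
    seconds = timeit.timeit(lambda: sum_of_squares(1_000_000), number=20) / 20
    print(f"Python baseline: {seconds * 1000:.1f} ms per call")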
Data compression review paper: how to make the world work fast!
In this project, we'll explore the mathematical backbone of data compression, learn about different methods of lossless compression, and discuss developments in lossy compression for images and video. You will write a review paper encompassing these topics and have the opportunity to program some data compression techniques for hands-on experience.
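For a flavor of the hands-on part, here is a toy run-length encoder and decoder, one of the simplest lossless schemes; it is only an illustration of the kind of technique the project covers, not one of its deliverables:

def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of identical bytes into (count, value) pairs."""
    runs = []
    for byte in data:
        if runs and runs[-1][1] == byte:
            runs[-1] = (runs[-1][0] + 1, byte)
        else:
            runs.append((1, byte))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Expand (count, value) pairs back into the original byte string."""
    return bytes(value for count, value in runs for _ in range(count))

assert rle_decode(rle_encode(b"aaaabbbcc")) == b"aaaabbbcc"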
AI Summary Application
The rise of large language models, such as GPT, has led to explosive growth in the use of AI in applications. In this project, we'll examine the principles of AI, look at how GPT works, and build an application that leverages a large language model to do something useful! An example is a desktop application that lets a user take a screenshot of an article or document; the text is then extracted from the image and fed to ChatGPT with a prompt along the lines of "summarize this piece of text." The user can read the summary and refine it based on their needs (e.g., "shorter summary," "summarize it in simpler terms," etc.).
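A rough sketch of that pipeline in Python is below. It assumes the pytesseract OCR wrapper and the openai client library (version 1.0 or later); the model name and prompt wording are placeholders, not fixed choices for the project:

from PIL import Image
import pytesseract
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize_screenshot(image_path: str, style: str = "a short paragraph") -> str:
    # 1. Pull the raw text out of the screenshot with OCR.
    text = pytesseract.image_to_string(Image.open(image_path))
    # 2. Ask the language model to summarize it in the requested style.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": f"Summarize this piece of text as {style}:\n\n{text}"}],
    )
    return response.choices[0].message.content

Refinement options ("shorter summary," "simpler terms") would simply re-run the same call with a different style string.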