I'm a PhD student at the GRASP Lab of the University of Pennsylvania, advised by Kostas Daniilidis.
Previously, I completed my BSc & MEng in Electrical and Computer Engineering at the National Technical University of Athens (NTUA), where I was honored to work with
Petros Maragos.
My research interests lie in 3D computer vision, specifically dynamic neural rendering, motion decomposition, tracking, and human reconstruction.
DynMF is a sparse trajectory decomposition that enables robust per-point tracking.
Beyond novel view synthesis (NVS), it allows us to control individual trajectories and enable or disable them, opening up new ways of video editing,
dynamic motion decoupling, and novel motion synthesis.
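To give a flavor of the idea, here is a minimal sketch of a sparse trajectory decomposition in PyTorch. It is not the actual DynMF implementation; all names, dimensions, and the softmax-based mixing are illustrative assumptions. Each point's displacement at time t is a near-sparse combination of a few shared, learned basis trajectories, which is what makes per-point tracking and trajectory control straightforward.

```python
import torch
import torch.nn as nn

class SparseTrajectoryField(nn.Module):
    """Minimal sketch of a sparse trajectory decomposition (not the actual DynMF code):
    every point's motion is a mixture of a few shared, learned basis trajectories."""
    def __init__(self, num_points, num_bases=16, hidden=64):
        super().__init__()
        # Per-point mixing coefficients over the basis trajectories (N, B)
        self.coeffs = nn.Parameter(0.01 * torch.randn(num_points, num_bases))
        # A small MLP maps a timestamp t to B basis displacements in 3D
        self.basis_mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bases * 3),
        )

    def forward(self, t):
        # t: tensor of shape (1,) holding a normalized time in [0, 1]
        basis = self.basis_mlp(t.view(1, 1)).view(-1, 3)   # (B, 3) basis displacements at time t
        weights = torch.softmax(self.coeffs, dim=-1)       # mixing weights, encouraged to be near one-hot
        return weights @ basis                             # (N, 3) per-point displacement

field = SparseTrajectoryField(num_points=1000)
xyz0 = torch.rand(1000, 3)                  # canonical point positions
xyz_t = xyz0 + field(torch.tensor([0.3]))   # track every point to time t = 0.3
# Sparsity of the mixing weights can additionally be encouraged with an L1 penalty.
```

Disabling a basis (zeroing its weights) or editing its MLP output would then alter only the points assigned to that motion, which is the intuition behind the editing and motion-decoupling applications above.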
We employ SMPL-X, a contemporary parametric model that enables joint extraction of 3D body
shape, face, and hand information from a single image. We use this holistic 3D reconstruction for the Sign Language Recognition task.
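As a rough illustration of the recognition step, a sequence of per-frame SMPL-X parameters (body pose, hand poses, expression) could be fed to a simple recurrent classifier. The sketch below is hypothetical and not the architecture used in the paper; the parameter dimension and class count are placeholders.

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Hypothetical sketch, not the paper's architecture: classify a sign from a
    sequence of per-frame SMPL-X parameters (body pose, hand poses, expression)."""
    def __init__(self, param_dim=265, hidden=256, num_classes=100):
        super().__init__()
        self.rnn = nn.GRU(param_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, params):                # params: (batch, frames, param_dim)
        feats, _ = self.rnn(params)           # (batch, frames, 2 * hidden)
        return self.head(feats.mean(dim=1))   # temporal average pooling -> class logits

# e.g. a batch of 8 signs, 32 frames each, 265 SMPL-X parameters per frame (placeholder sizes):
# logits = SignClassifier()(torch.randn(8, 32, 265))
```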
We attempt to recognize musical instruments in polyphonic audio by feeding only their raw waveforms
into deep learning models. Various recurrent and convolutional architectures incorporating residual
connections are examined and parameterized in order to build end-to-end classifiers with low computational cost and minimal preprocessing.
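For intuition, a minimal end-to-end tagger of this kind might look like the sketch below. It is a simplified stand-in, not one of the exact architectures we examine; the instrument count, kernel sizes, and widths are placeholders.

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Basic residual block over a 1D audio feature map (illustrative only)."""
    def __init__(self, channels, kernel=9):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.bn1, self.bn2 = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)

    def forward(self, x):
        h = torch.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return torch.relu(x + h)              # residual connection

class RawWaveformTagger(nn.Module):
    """End-to-end multi-label instrument tagger fed raw waveforms (sketch)."""
    def __init__(self, num_instruments=11, width=64):
        super().__init__()
        self.stem = nn.Conv1d(1, width, kernel_size=64, stride=16)  # learned front end, no spectrogram
        self.blocks = nn.Sequential(*[ResBlock1d(width) for _ in range(3)])
        self.head = nn.Linear(width, num_instruments)

    def forward(self, wav):                   # wav: (batch, 1, samples)
        h = self.blocks(torch.relu(self.stem(wav)))
        h = h.mean(dim=-1)                    # global average pooling over time
        return self.head(h)                   # logits; train with BCEWithLogitsLoss

# logits = RawWaveformTagger()(torch.randn(4, 1, 16000))  # e.g. four 1-second clips at 16 kHz
```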
We present an approach for instrument classification in polyphonic music using only monophonic training data, based on mixing-augmentation methods.
Specifically, we experiment with pitch- and tempo-based synchronization, as well as with mixing tracks of similar music genres.
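The core of such an augmentation is to sum two monophonic recordings into a synthetic polyphonic example whose target is the union of their instrument labels. The sketch below illustrates that step only; it assumes the tracks are already pitch/tempo-aligned and share a sample rate, and the function name and mixing-ratio convention are hypothetical.

```python
import numpy as np

def mix_tracks(wav_a, wav_b, label_a, label_b, snr_db=0.0):
    """Illustrative mixing augmentation (not the paper's exact procedure):
    combine two aligned monophonic recordings into a synthetic polyphonic
    example with a multi-label target."""
    n = min(len(wav_a), len(wav_b))
    a, b = wav_a[:n], wav_b[:n]
    # Scale the second track to the desired level (in dB) relative to the first
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-8)
    b = b * (rms(a) / rms(b)) * 10 ** (-snr_db / 20)
    mix = a + b
    mix = mix / (np.max(np.abs(mix)) + 1e-8)        # peak-normalize to avoid clipping
    return mix, np.maximum(label_a, label_b)        # union of binary instrument labels

# e.g. mix two genre-matched stems at equal loudness (names are placeholders):
# mix, y = mix_tracks(guitar_wav, piano_wav, y_guitar, y_piano, snr_db=0.0)
```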