QMIND Computer Vision
As part of a team, I helped develop a class-conditioned variational autoencoder (VAE) that refines vectorized sketch images so they more closely resemble their intended class. The model conditions on class labels and uses a bidirectional LSTM encoder to produce a latent representation of an input sketch, paired with a decoder that reconstructs the sketch with stronger class-specific features. In experiments across multiple classes, this approach outperformed traditional methods, generating sketches that were more identifiable and better aligned with their target classes.
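To make the architecture concrete, the sketch below shows the general shape of a class-conditioned VAE with a bidirectional LSTM encoder over stroke sequences. This is a minimal illustration in PyTorch, not the project's actual code: the five-dimensional stroke format, layer sizes, and embedding dimensions are assumptions chosen for clarity.

```python
# Minimal sketch of a class-conditioned VAE for stroke-sequence sketches.
# Assumes PyTorch and a (dx, dy, pen-state) stroke representation; all
# dimensions are illustrative, not the project's actual hyperparameters.
import torch
import torch.nn as nn

class ConditionalSketchVAE(nn.Module):
    def __init__(self, num_classes, stroke_dim=5, hidden=256, latent=128, emb=64):
        super().__init__()
        self.class_emb = nn.Embedding(num_classes, emb)
        # Bidirectional LSTM encoder reads the full stroke sequence.
        self.encoder = nn.LSTM(stroke_dim, hidden, batch_first=True, bidirectional=True)
        self.to_mu = nn.Linear(2 * hidden + emb, latent)
        self.to_logvar = nn.Linear(2 * hidden + emb, latent)
        # Decoder LSTM reconstructs the sequence, conditioned on z and the class embedding.
        self.init_state = nn.Linear(latent + emb, 2 * hidden)
        self.decoder = nn.LSTM(stroke_dim + latent + emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, stroke_dim)

    def encode(self, strokes, labels):
        _, (h, _) = self.encoder(strokes)        # h: (2, B, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)      # concatenate forward/backward states
        hc = torch.cat([h, self.class_emb(labels)], dim=-1)
        return self.to_mu(hc), self.to_logvar(hc)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, strokes, z, labels):
        cond = torch.cat([z, self.class_emb(labels)], dim=-1)
        h0, c0 = torch.tanh(self.init_state(cond)).chunk(2, dim=-1)
        state = (h0.unsqueeze(0).contiguous(), c0.unsqueeze(0).contiguous())
        # Teacher forcing: append (z, class) to every decoder input step.
        steps = cond.unsqueeze(1).expand(-1, strokes.size(1), -1)
        out, _ = self.decoder(torch.cat([strokes, steps], dim=-1), state)
        return self.out(out)

    def forward(self, strokes, labels):
        mu, logvar = self.encode(strokes, labels)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(strokes, z, labels)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```

Training would combine a reconstruction loss on the decoded strokes with the KL term above; conditioning both the encoder and decoder on the class label is what lets the decoder pull reconstructions toward class-specific features.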