
Luis Herranz

Researcher in computer vision, machine learning and multimedia (Universidad Autónoma de Madrid)    


Author: lherranz

Source-free unsupervised domain adaptation

November 9, 2023 (updated November 14, 2023) continual learning, deep learning, domain adaptation, transfer learning

Can we perform unsupervised domain adaptation without accessing source data? Recent works show that it is not only possible but also very effective. …

Read More

Protected: Improving the perception of low-light enhanced images

January 20, 2023 uncategorized

There is no excerpt because this is a protected post.

Read More

Compression for training on-board machine vision: distributed data collection and dataset restoration for autonomous vehicles

September 28, 2022 deep learning, generative adversarial networks, image compression

Unmanned vehicles require large amounts of diverse data to train their machine vision modules. Importantly, data should include rare yet important events. …

Read More

MAE, SlimCAE and DANICE: towards practical neural image compression

September 17, 2022 continual learning, deep learning, image compression, transfer learning

Neural image and video codecs achieve competitive rate-distortion performance. However, they have a series of practical limitations, such as reliance on heavy models. …

Read More

Neural image compression in a nutshell (part 2: architectures and comparison)

August 31, 2022 (updated October 8, 2022) deep learning, image compression

Neural image codecs typically use specific elements in their architectures, such as GDN layers, hyperpriors and autoregressive context models. These elements allow exploiting contextual …

Read More

Neural image compression in a nutshell (part 1: main idea)

August 24, 2022 (updated January 16, 2025) deep learning, image compression

Neural image compression (a.k.a. learned image compression) is a new paradigm where codecs are modeled as deep neural networks whose parameters are learned from data. …

Read More

Mix and match networks (part 2)

February 8, 2021 (updated September 1, 2022) deep learning, generative adversarial networks, transfer learning

This is a brief update on mix and match networks (M&MNets), describing the new ideas included in the extended version (IJCV 2020). …

Read More

MeRGANs: generating images without forgetting

October 29, 2018 (updated March 8, 2021) continual learning, deep learning, generative adversarial networks, transfer learning

The problem of catastrophic forgetting (a network forgets previous tasks when learning a new one) and how to address it has been studied mostly …

Read More

Learning RGB-D features for images and videos

October 17, 2018 (updated February 17, 2021) deep learning, RGB-D, transfer learning

Depth sensors capture information that complements conventional RGB data. How to combine them in an effective multimodal representation is still actively studied. …

Read More

Mix and match networks

August 31, 2018 (updated November 5, 2023) deep learning, generative adversarial networks, transfer learning

We recently explored how we can take multiple seen image-to-image translators and reuse them to infer other unseen translations, in an approach we call …

Read More
