Automatic Background Removal


It is common practice nowadays to replace the existing background of a photo or video with a nicer image or another video. A number of deep learning algorithms can detect a solid-colored background and remove it from the photo or video. The other option is to manually mask the background we want to remove, though this is a time-consuming process that requires manual effort. Deep learning has seen many applications in computer vision and image processing, including image classification, image color restoration, pixel reconstruction, pose estimation, photo captioning, object detection, and object segmentation.
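As a rough sketch of the solid-colored-background idea, pixels close to a known key color can simply be made transparent. The key color and tolerance below are illustrative assumptions that would be tuned per image in practice:

```python
import numpy as np

def remove_solid_background(rgb, key_color=(0, 255, 0), tolerance=60):
    """Return an RGBA image with pixels near `key_color` made transparent.

    rgb: (H, W, 3) uint8 array. `key_color` and `tolerance` are assumed
    example values, not part of any particular algorithm.
    """
    rgb = np.asarray(rgb, dtype=np.int16)
    # Euclidean distance of every pixel from the key color
    dist = np.linalg.norm(rgb - np.array(key_color, dtype=np.int16), axis=-1)
    # Pixels within the tolerance are treated as background (alpha = 0)
    alpha = np.where(dist < tolerance, 0, 255).astype(np.uint8)
    return np.dstack([rgb.astype(np.uint8), alpha])
```

This is the simplest possible baseline; the deep learning methods surveyed below exist precisely because real backgrounds are rarely a single solid color.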

A number of researchers and developers have devised techniques to replace the background with a new one.

State of the Art

The authors in [1] propose a method that fully automatically generates soft segments corresponding to semantically significant regions of the image by fusing high-level information from a neural network with low-level image features. They show that by carefully defining affinities between different regions of the image, soft segments with semantic boundaries can be revealed by spectral analysis of the constructed Laplacian matrix. Their relaxed sparsification method can generate accurate soft transitions while also providing a sparse set of layers, as shown in Figure 1. They demonstrate that while semantic segmentation and spectral soft segmentation methods fail to provide layers accurate enough for image editing tasks, their soft segments provide a convenient intermediate image representation that makes several targeted image editing tasks trivial, tasks which would otherwise require the manual labor of a skilled artist.

Figure 1 [1]: Layers that represent the semantically meaningful regions as well as the soft transitions between them, generated automatically by fusing high-level and low-level image features in a single graph structure.

Developers in [2] built a custom algorithm with multiple stages. It classifies image regions around persons with AI, then optimizes the results to improve the edges, detecting foreground layers and separating them from the background. Additional algorithms improve fine details and prevent color contamination. The AI treats persons as foreground and everything else as background, so it only works if there is at least one person in the image.
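The actual edge-optimization stages of [2] are proprietary, but the general idea of softening a hard person mask at its boundary can be sketched with a simple box-blur feather. The `radius` parameter is an assumed example value:

```python
import numpy as np

def feather_mask(mask, radius=1):
    """Soften the hard edges of a binary foreground mask.

    A simple stand-in for edge refinement: average each pixel with its
    neighbors so the foreground/background boundary becomes a gradual
    alpha ramp instead of a hard step.
    """
    m = np.asarray(mask, dtype=float)
    padded = np.pad(m, radius, mode="edge")
    out = np.zeros_like(m)
    size = 2 * radius + 1
    # Sum the (2r+1) x (2r+1) neighborhood of every pixel, then average
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out / (size * size)
```

Real systems go much further (trimap generation, matting, color decontamination), but even this crude feathering avoids the jagged "cut-out" look of a raw binary mask.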

The developers in [3] describe Cloudinary, a cloud-based service that provides solutions for image and video management, including server- and client-side upload, on-the-fly image and video manipulation, quick CDN delivery, and a variety of asset management options. The Cloudinary AI Background Removal add-on combines several deep learning algorithms to recognize the primary foreground object(s) in a photo and accurately remove the background in a matter of seconds. You can optionally specify one of a set of object names to instruct the add-on to remove everything except that object. Users can also include additional directives indicating what the AI algorithm should treat as the foreground object to keep.
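As an illustration of the on-the-fly usage, the add-on can be applied through a delivery-URL transformation. The cloud name and public ID below are placeholders, and `e_background_removal` is the effect name used in Cloudinary transformation URLs (check the add-on documentation for the exact syntax and any object-hint options):

```python
def bg_removal_url(cloud_name, public_id):
    """Build a Cloudinary delivery URL applying background removal.

    Illustrative only: `cloud_name` and `public_id` are placeholders;
    the effect name follows Cloudinary's transformation-URL convention.
    """
    return (f"https://res.cloudinary.com/{cloud_name}"
            f"/image/upload/e_background_removal/{public_id}.png")
```

Requesting `.png` matters here: the removed background becomes transparency, which a JPEG cannot carry.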

Figure 2 [3]: Cloudinary results

The authors in [4] proposed a novel end-to-end boundary-aware model, BASNet, together with a hybrid fusion loss for accurate salient object detection. BASNet is a predict-refine architecture consisting of two components: a prediction network and a refinement module. Combined with the hybrid loss, BASNet is able to capture both large-scale and fine structures, e.g. thin regions and holes, and produce salient object detection maps with clear boundaries. Experimental results on six datasets demonstrate that the proposed model outperforms 15 other state-of-the-art methods in terms of both region-based and boundary-aware measures. Additionally, the network architecture is modular: it can easily be extended or adapted to other tasks by replacing either the prediction network or the refinement module.
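The flavor of a hybrid loss can be sketched in a few lines. BASNet combines pixel-level BCE, patch-level SSIM, and region-level IoU terms; for brevity the sketch below includes only the BCE and IoU terms, equally weighted, which is a simplification rather than the paper's exact loss:

```python
import numpy as np

def hybrid_loss(pred, target, eps=1e-7):
    """Simplified hybrid saliency loss in the spirit of BASNet.

    pred/target: arrays of foreground probabilities in [0, 1].
    Combines a pixel-level BCE term with a region-level soft-IoU term
    (the SSIM term from the paper is omitted for brevity).
    """
    p = np.clip(np.asarray(pred, dtype=float), eps, 1 - eps)
    t = np.asarray(target, dtype=float)
    # Pixel-level binary cross-entropy
    bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
    # Region-level soft IoU loss: 1 - intersection / union
    inter = np.sum(p * t)
    union = np.sum(p) + np.sum(t) - inter
    iou = 1.0 - inter / (union + eps)
    return bce + iou
```

The intuition is that BCE alone treats all pixels independently, while the IoU term scores the predicted region as a whole, which is what pushes the network toward clean, well-placed boundaries.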


  1. Y. Aksoy, T.-H. Oh, S. Paris, M. Pollefeys, and W. Matusik, "Semantic Soft Segmentation," ACM Trans. Graph., Vol. 37, No. 4, Article 72, August 2018.
