Gradient-Controlled Gaussian Kernel for Image Inpainting

Document Type : Research Article


Department of Electrical Engineering, Vali-e-Asr University of Rafsanjan, Rafsanjan, Iran


Image inpainting is the process of filling in damaged or missing regions of an image using information from the known regions or pixels. Convolution-based methods, in which a kernel is iteratively convolved with the damaged image, are among the most important inpainting techniques. These algorithms are very fast, but they perform poorly in structured and textured regions and tend to blur the result. The kernel size is a critical parameter in convolution-based algorithms: a large kernel blurs edges, while a small kernel may not gather enough information for reconstruction. In this paper, a novel convolution-based algorithm is proposed that uses the known gradients of the pixels to construct the convolution mask. In this algorithm, the kernel size is controlled by the image gradient in the known regions. The algorithm computes a weighted sum of the known pixels in a neighborhood around each damaged pixel and assigns that value to the damaged pixel. The proposed algorithm is fast, reconstructs both edges and smooth regions well, is iterative, and is very simple to implement. Experimental results show the effectiveness of our algorithm.
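The iterative scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the control law `sigma = base_sigma / (1 + |gradient|)` that narrows the Gaussian near strong edges is an assumption standing in for the paper's gradient-controlled kernel, and the hole initialization and iteration count are arbitrary choices for the demo.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized isotropic Gaussian kernel of odd width `size`."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def inpaint(image, mask, size=5, base_sigma=1.5, iters=30):
    """Iteratively replace each damaged pixel (mask == True) with a
    Gaussian-weighted average of its neighborhood. The kernel spread
    shrinks where the local gradient is strong -- an illustrative
    stand-in for the paper's gradient-controlled kernel size."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()          # crude initialization of the hole
    r = size // 2
    ys, xs = np.nonzero(mask)
    for _ in range(iters):
        gy, gx = np.gradient(img)
        gmag = np.hypot(gx, gy)            # local gradient magnitude
        padded = np.pad(img, r, mode="edge")
        new = img.copy()
        for y, x in zip(ys, xs):
            sigma = base_sigma / (1.0 + gmag[y, x])  # assumed control law
            k = gaussian_kernel(size, sigma)
            win = padded[y:y + size, x:x + size]
            new[y, x] = (k * win).sum()    # weighted sum of the neighborhood
        img = new                          # known pixels are never modified
    return img
```

Because only the damaged pixels are rewritten each pass, the known pixels act as fixed boundary conditions and information diffuses inward from the hole border over the iterations.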

