Novel GAN-Based Image Completion: Addressing Structure and Texture Consistency in Missing Regions

Authors

  • Seyyed Ahmad Edalatpanah, Department of Applied Mathematics, Ayandegan Institute of Higher Education, Tonekabon, Iran.
  • Dragan Marinkovic, Faculty of Mechanical Engineering and Transport Systems, TU Berlin, Germany.
  • Zeynab Parandavar, Department of Computer Engineering, Ayandegan Institute of Higher Education, Tonekabon, Iran.

Keywords

Image completion, Deep learning, Generative adversarial networks, Texture synthesis, Structure reconstruction, Convolutional neural networks

Abstract

The use of Deep Neural Networks (DNNs) to solve Image Completion (IC) has emerged as a popular research topic. Structure and texture are two essential components of an image, and completion algorithms must handle both properly to generate realistic results. Many modern techniques repair an image with a single end-to-end framework that does not treat structure and texture explicitly, so their outputs frequently exhibit deformed structures and inconsistent textures. This study proposes a novel IC method consisting of a sketch completion network and a texture completion network. A Generative Adversarial Network (GAN) restores the sketch structures in the missing portion of the image, and a Graph Neural Network (GNN) then synthesizes consistent texture in that area, conditioned on the completed sketch and the surrounding partial image. By representing the two components separately in a DNN, the proposed approach not only synthesizes semantically valid and visually plausible content in the missing region but also allows a user to modify the structure characteristics of that region interactively.
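The two-stage design outlined in the abstract can be illustrated with a minimal PyTorch sketch, assuming 256×256 RGB inputs and a binary mask (1 = missing). The module names (SketchGenerator, TextureGenerator, PatchDiscriminator), layer widths, and losses below are illustrative assumptions and do not reproduce the authors' architecture; the sketch only shows how a structure network can feed a texture network before an adversarial critic scores the composited result.

```python
# Minimal two-stage image-completion sketch: structure (sketch) network,
# then texture network, then a PatchGAN-style critic. Hypothetical layout,
# not the paper's implementation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride, 1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SketchGenerator(nn.Module):
    """Predicts an edge/sketch map for the missing region from the masked image and mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(4, 32), conv_block(32, 64, stride=2), conv_block(64, 64),
            nn.Upsample(scale_factor=2, mode="nearest"), conv_block(64, 32),
            nn.Conv2d(32, 1, 3, 1, 1), nn.Sigmoid(),
        )

    def forward(self, masked_img, mask):
        return self.net(torch.cat([masked_img, mask], dim=1))

class TextureGenerator(nn.Module):
    """Fills the hole conditioned on the completed sketch and the surrounding partial image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(5, 32), conv_block(32, 64, stride=2), conv_block(64, 64),
            nn.Upsample(scale_factor=2, mode="nearest"), conv_block(64, 32),
            nn.Conv2d(32, 3, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, masked_img, mask, sketch):
        return self.net(torch.cat([masked_img, mask, sketch], dim=1))

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic providing the adversarial signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, 1, 1),
        )

    def forward(self, img):
        return self.net(img)

# Forward pass: complete the sketch first, then synthesize texture,
# and composite the prediction only inside the missing region.
img = torch.rand(1, 3, 256, 256)            # ground-truth image in [0, 1]
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 96:160, 96:160] = 1.0            # square hole (1 = missing)
masked_img = img * (1 - mask)

sketch_net, texture_net, critic = SketchGenerator(), TextureGenerator(), PatchDiscriminator()
sketch = sketch_net(masked_img, mask)
raw_out = texture_net(masked_img, mask, sketch)
completed = masked_img + mask * (raw_out * 0.5 + 0.5)   # map tanh output to [0, 1]

adv_score = critic(completed).mean()                     # raw critic score (adversarial term)
recon_loss = nn.functional.l1_loss(completed * mask, img * mask)
print(completed.shape, float(adv_score), float(recon_loss))
```

In a training loop, the two generators and the critic would typically be optimized with a combination of reconstruction and adversarial losses; the details of that objective are specific to the paper and are not reproduced here.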

Published

2024-07-13

How to Cite

Novel GAN-Based Image Completion: Addressing Structure and Texture Consistency in Missing Regions. (2024). Computational Engineering and Technology Innovations, 1(1). https://ceti.reapress.com/journal/article/view/20