DETEKSI CITRA WAJAH PALSU UNTUK KEPERLUAN FORENSIK DIGITAL BERBASIS DEEP CONVOLUTIONAL NEURAL NETWORK

  • DELVINA TRI AGUSTIN
  • 14002623

ABSTRACT

 

  • Name : Delvina Tri Agustin
  • NIM : 14002623
  • Study Program : Computer Science
  • Level : Master's (S2)
  • Concentration : Image Processing
  • Thesis Title : Deteksi Citra Wajah Palsu Untuk Keperluan Forensik Digital Berbasis Deep Convolutional Neural Network

As technology advances, digital images have become increasingly easy to manipulate, aided by a wide range of sophisticated editing software. This becomes a challenge when image forgery is the subject of legal proceedings in court, especially for digital forensics investigators who must identify the original image. We propose a DCNN-based method for classifying the authenticity of digital face images. DCNNs have proven robust enough to overcome most of the limitations that hinder face recognition algorithms based on handcrafted features, including variations in illumination, pose, expression, and occlusion. The DCNN architectures used in this study are MobileNet, VGG16, NASNetMobile, Inception V3, and DenseNet 201. Each architecture is trained with several learning-rate settings so that the results can be compared thoroughly in terms of accuracy, precision, recall, and kappa score. This research is intended to help law enforcement, particularly those working in digital forensics, determine the authenticity of digital photographs, especially fake face images. In addition, we anticipate that our work will be used to build a program that enables automatic verification of the authenticity of face images in the future.

Keywords: Digital Forensics, DCNN, Comparison, Learning Rate
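The abstract describes a comparison pipeline: five ImageNet-pretrained DCNN backbones, each fine-tuned under several learning rates and then scored on accuracy, precision, recall, and Cohen's kappa. The sketch below illustrates one way such a comparison could be set up in TensorFlow/Keras. It is not the thesis code: the dataset layout ("faces/train" and "faces/test" with real/fake subfolders), the learning-rate grid, the epoch count, and the frozen-backbone transfer-learning setup are all assumptions made for illustration.

```python
# Minimal sketch (not the thesis code): compare five pretrained DCNN backbones
# across several learning rates on a binary real-vs-fake face dataset, and
# report accuracy, precision, recall, and Cohen's kappa for each run.
import tensorflow as tf
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, cohen_kappa_score)

BACKBONES = {
    "MobileNet":    tf.keras.applications.MobileNet,
    "VGG16":        tf.keras.applications.VGG16,
    "NASNetMobile": tf.keras.applications.NASNetMobile,
    "InceptionV3":  tf.keras.applications.InceptionV3,
    "DenseNet201":  tf.keras.applications.DenseNet201,
}
LEARNING_RATES = [1e-3, 1e-4, 1e-5]   # assumed grid, not from the thesis
IMG_SIZE, BATCH = (224, 224), 32

def build_model(backbone_fn, lr):
    """Frozen ImageNet backbone + sigmoid head for real-vs-fake classification."""
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=IMG_SIZE + (3,))
    base.trainable = False  # transfer learning: keep pretrained features fixed
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical dataset layout: "faces/train" and "faces/test", each containing
# "real/" and "fake/" subfolders. Per-backbone preprocess_input is omitted here.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/train", image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "faces/test", image_size=IMG_SIZE, batch_size=BATCH,
    label_mode="binary", shuffle=False)

for name, backbone_fn in BACKBONES.items():
    for lr in LEARNING_RATES:
        model = build_model(backbone_fn, lr)
        model.fit(train_ds, epochs=5, verbose=0)  # epoch count is illustrative
        y_true = tf.concat([y for _, y in test_ds], axis=0).numpy().ravel().astype(int)
        y_pred = (model.predict(test_ds, verbose=0).ravel() > 0.5).astype(int)
        print(f"{name} lr={lr}: "
              f"acc={accuracy_score(y_true, y_pred):.3f} "
              f"prec={precision_score(y_true, y_pred):.3f} "
              f"rec={recall_score(y_true, y_pred):.3f} "
              f"kappa={cohen_kappa_score(y_true, y_pred):.3f}")
```

In practice each backbone's own preprocess_input transform and a second fine-tuning stage with some layers unfrozen would normally be added before drawing conclusions from the metrics.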
 

KEYWORDS

Lightweight Deep Convolutional Neural Network


 

Detail Information

This thesis was written by:

  • Name : DELVINA TRI AGUSTIN
  • NIM : 14002623
  • Study Program : Computer Science
  • Campus : Margonda
  • Year : 2022
  • Period : I
  • Advisor : Dr. Hilman Ferdinandus Pardede, ST, M.EICT
  • Assistant :
  • Code : 0020.S2.IK.TESIS.I.2022
  • Entered by : RKY
  • Last updated : 22 May 2023
  • Viewed : 170 times

ABOUT THE LIBRARY


UNIVERSITAS NUSA MANDIRI LIBRARY


The E-Library of the Universitas Nusa Mandiri Library is a digital platform that provides access to information within the Universitas Nusa Mandiri campus environment, including collections of books, journals, e-books, and more.


INFORMATION


Address : Jln. Jatiwaringin Raya No. 02, RT 08 RW 013, Kelurahan Cipinang Melayu, Kecamatan Makassar, East Jakarta

Email : perpustakaan@nusamandiri.ac.id

Operating Hours
Monday - Friday : 08.00 to 20.00 WIB
Lunch Break : 12.00 to 13.00 WIB
Evening Break : 18.00 to 19.00 WIB

Universitas Nusa Mandiri Library © 2020