UTILIZATION OF STOP MOTION IN UNHAR ADVERTISEMENT DESIGN

  • Khairunnisa Khairunnisa, Universitas Harapan Medan
  • Mufida Khairani, Universitas Harapan Medan
Keywords: Stop motion, University of Harapan Medan, advertisement

Abstract

The stop-motion method is an animation technique that captures a sequence of still images and assembles them into a video, creating the illusion that the pictured objects are moving. This method is used to make the University of Harapan Medan advertisement attractive and unique so that it appeals to all audiences. It is hoped that this can support the promotion and marketing efforts of the University of Harapan Medan.
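The frame-by-frame principle the abstract describes can be sketched in code: a stop-motion clip is simply a list of still photographs played back at a fixed frame rate. The sketch below is illustrative only; the 12 fps value is a common choice for stop-motion work, not a figure taken from this article, and the function names are hypothetical.

```python
# Minimal sketch of the stop-motion principle: still photographs shown
# at a fixed frame rate create the illusion of motion.
# The 12 fps default is an assumed, typical stop-motion rate.

def frames_needed(duration_s: float, fps: int = 12) -> int:
    """Number of still photos required for a clip of the given length."""
    return round(duration_s * fps)

def playback_schedule(n_frames: int, fps: int = 12) -> list:
    """Timestamp (in seconds) at which each photo is displayed."""
    return [i / fps for i in range(n_frames)]

# A 5-second advertisement segment at 12 fps needs 60 photographs:
n = frames_needed(5, fps=12)
times = playback_schedule(n, fps=12)
```

In practice the captured frames would be handed to a video encoder (e.g. an image-sequence input to a tool such as ffmpeg) at the chosen frame rate, but the timing arithmetic above is the core of the illusion.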


Published: 2025-01-30
Section: Articles
