In the rapidly evolving field of data science, the trend toward larger datasets has substantially benefited deep learning, particularly for image feature extraction. However, it has also driven models toward ever greater depth, raising computational cost and training time. This prompts an important question: can the depth of a model be reduced for smaller datasets without significantly compromising performance? This study investigates DeepGCN, a deep graph convolutional network, on mini-ImageNet, a subset of ImageNet, by reducing its number of layers. Using loss landscape visualization, an innovative method, this research visually assesses optimization efficiency and challenges the conventional belief that more layers always yield higher accuracy. This exploration is important for developing adaptable and efficient deep models, underscoring the need to select an appropriate depth in the age of AI and large models.
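To make the loss landscape visualization concrete, the sketch below illustrates the commonly used approach of evaluating the loss on a 2-D grid around the trained weights along two filter-normalized random directions (in the spirit of Li et al., 2018). It is a minimal illustration, not the paper's exact pipeline: the model, criterion, data batch, grid resolution, and span are assumed placeholders.

```python
# Minimal sketch: 2-D loss landscape around trained weights using
# filter-normalized random directions (after Li et al., 2018).
# Model/criterion/inputs/targets and grid settings are illustrative.
import torch


def filter_normalized_direction(model):
    """Draw a random direction shaped like the model's parameters,
    rescaled filter-wise to match each parameter's norm."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:  # conv/linear weights: normalize per output filter
            for dd, pp in zip(d, p):
                dd.mul_(pp.norm() / (dd.norm() + 1e-10))
        else:            # biases / norm params: match overall scale
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction


@torch.no_grad()
def loss_surface(model, criterion, inputs, targets, steps=11, span=1.0):
    """Evaluate the loss on a (steps x steps) grid:
    theta = theta* + a*d1 + b*d2 for a, b in [-span, span]."""
    base = [p.clone() for p in model.parameters()]
    d1 = filter_normalized_direction(model)
    d2 = filter_normalized_direction(model)
    alphas = torch.linspace(-span, span, steps)
    surface = torch.zeros(steps, steps)
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            for p, w, u, v in zip(model.parameters(), base, d1, d2):
                p.copy_(w + a * u + b * v)  # perturb the weights
            surface[i, j] = criterion(model(inputs), targets).item()
    for p, w in zip(model.parameters(), base):  # restore trained weights
        p.copy_(w)
    return surface
```

The resulting grid can be rendered as a contour or surface plot; flatter, smoother surfaces are typically read as evidence of easier optimization, which is the kind of comparison this study uses to contrast deeper and shallower DeepGCN variants.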