Advances in computational capability have fueled the rapid growth of new technologies such as Artificial Intelligence (AI) and accelerated the development of Intelligent Transportation Systems (ITS). Autonomous Vehicles (AVs), an important application of AI, are accompanied by an increasing number of data-driven applications that demand substantial computation resources. Although cloud servers can carry out these computations, their high latency cannot satisfy tasks that are both computation-intensive and delay-sensitive. Vehicle Edge Computing (VEC), by contrast, utilizes edge servers that interact directly with the user vehicle and is expected to address these problems. A key challenge is therefore how the user vehicle decides whether to compute a task locally or offload it to a nearby edge server. In this paper, we approach this decision-making problem in computation offloading with a deep reinforcement learning method, the Deep Q-Network (DQN), applied to V2R (Vehicle-to-RSU) transmission. For this purpose, a simulation environment is established to conduct our experiments. The results show that DQN-powered vehicles outperform all non-agent vehicles that adopt a random offloading strategy in terms of task completion rate.
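To illustrate the offloading decision the abstract describes, the following is a minimal, dependency-free sketch. It substitutes tabular Q-learning for the paper's DQN (no neural network), and the latency numbers, state space, and reward shaping are illustrative assumptions, not values from the paper: a vehicle observes channel quality and learns whether to compute locally or offload to the RSU.

```python
import random

# Toy stand-in for the VEC offloading decision. All constants below are
# illustrative assumptions; tabular Q-learning replaces the paper's DQN
# to keep the sketch self-contained.
STATES = ["good_channel", "bad_channel"]
ACTIONS = ["local", "offload"]

def step(state, action):
    """Return the reward: negative task latency in this toy model."""
    if action == "local":
        return -5.0  # fixed local compute latency (assumed)
    # Offloading: fast compute at the RSU, but the upload cost
    # depends on the V2R channel quality (assumed values).
    return -2.0 if state == "good_channel" else -9.0

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        r = step(s, a)
        # One-step (bandit-style) Q update; tasks are independent here,
        # so there is no next-state bootstrap term.
        q[(s, a)] += alpha * (r - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)  # learned: offload on a good channel, local on a bad one
```

A full DQN agent, as used in the paper, would replace the Q-table with a neural network over a continuous state (e.g., task size, deadline, channel rate) and add experience replay and a target network, but the decision structure is the same.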