
Updates

Midterm Update

     Current research is being done into publicly available servers and their latency to us here in Victoria. One option for cloud computation is Amazon's EC2 service. Using https://www.cloudping.info/, a latency of ~20-30 ms to Amazon's California datacenter appears achievable; however, this measures only a bare ping request, not a packet carrying data. The ~30 ms mark is estimated to be close to the maximum latency allowable for VR/AR applications. Further testing will be done by spinning up a server in Amazon's California datacenter and making data-heavy requests against it, measuring the data transfer times.
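
As a rough picture of what that test could look like, here is a minimal sketch of a client that times payload transfers of increasing size. The hostname, port, and the payload server it talks to are all assumptions, not an existing setup:

```python
# Minimal sketch of the planned transfer test, assuming a simple
# payload server (hypothetical) is listening on the EC2 instance:
# the client asks for N bytes and times the full round trip, which a
# bare ping cannot capture.
import socket
import time

HOST = "ec2-x-x-x-x.us-west-1.compute.amazonaws.com"  # placeholder instance DNS
PORT = 5000  # hypothetical port opened in the instance's security group

def timed_request(payload_bytes):
    """Request `payload_bytes` from the server; return round-trip seconds."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        start = time.perf_counter()
        sock.sendall(f"{payload_bytes}\n".encode())  # ask for this many bytes
        received = 0
        while received < payload_bytes:
            chunk = sock.recv(65536)
            if not chunk:
                break
            received += len(chunk)
        return time.perf_counter() - start

for size in (1_000, 100_000, 1_000_000):  # 1 KB up to 1 MB
    print(f"{size:>9} bytes: {timed_request(size) * 1000:.1f} ms")
```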


     A better option for cloud computation of VR/AR would be a 5G network coupled with edge computing/caching. AWS also offers an edge computation service called Lambda@Edge (https://aws.amazon.com/lambda/edge/). Further investigation into this service is needed to determine whether it is usable for our implementation. If it grants a usable latency, and if the servers are powerful enough to render some VR imagery, this system will be used for the implementation.
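
If this route pans out, one low-effort latency probe would be a viewer-request function that answers directly at the edge, so the round trip never touches an origin server. A minimal sketch (attaching the function to a CloudFront distribution is assumed and not shown):

```python
import json
import time

def lambda_handler(event, context):
    # Viewer-request trigger: generate the response at the edge location
    # itself, so the measured round trip never includes an origin server.
    return {
        "status": "200",
        "statusDescription": "OK",
        "headers": {
            "content-type": [{"key": "Content-Type", "value": "application/json"}]
        },
        "body": json.dumps({"edge_timestamp": time.time()}),
    }
```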


     If both Amazon EC2 and Lambda@Edge are deemed too slow, or cannot satisfy our other constraints, testing will be done on a local network. This is not optimal; however, it is a good sandbox for testing the feasibility of VR/AR computations. With that initial sandboxing done, the next step would be further research into potential cloud computation services.

March 12th Update

Since the midterm update I have spun up two AWS EC2 instances. The first, in Ohio, offered pings of ~80 ms, which is well outside our acceptable range. The second, in California, gave consistent pings of ~30 ms, in line with what we expected from the earlier cloudping testing. Based on these results, we will use the California server location for the remainder of our testing.
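
For reference, a comparison like this can be scripted; the sketch below averages ten ICMP pings per instance, with placeholder hostnames standing in for the real instance addresses:

```python
# Averages ten ICMP pings per instance (Linux/macOS `ping -c` syntax;
# Windows uses -n). Hostnames are placeholders for the real instances.
import re
import statistics
import subprocess

HOSTS = {
    "us-east-2 (Ohio)": "ec2-a-b-c-d.us-east-2.compute.amazonaws.com",
    "us-west-1 (California)": "ec2-w-x-y-z.us-west-1.compute.amazonaws.com",
}

for region, host in HOSTS.items():
    out = subprocess.run(["ping", "-c", "10", host],
                         capture_output=True, text=True).stdout
    times = [float(t) for t in re.findall(r"time=([\d.]+)", out)]
    if times:
        print(f"{region}: mean {statistics.mean(times):.1f} ms")
    else:
        print(f"{region}: no replies")
```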


Additionally, I have applied for an AWS Educate account. This will grant a $35 credit, allowing us to spin up GPU-equipped servers for further testing.
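
Once the credit is active, launching such a server could look like the following boto3 sketch; the AMI ID and key pair name are placeholders, and g4dn.xlarge is just one plausible GPU instance type:

```python
# Hypothetical boto3 sketch; the AMI ID and key pair are placeholders,
# and g4dn.xlarge (one NVIDIA T4 GPU) is just one candidate type.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")  # N. California
response = ec2.run_instances(
    ImageId="ami-XXXXXXXXXXXXXXXXX",  # placeholder: GPU/driver-ready AMI
    InstanceType="g4dn.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="vr-test-key",            # placeholder key pair name
)
print("launched:", response["Instances"][0]["InstanceId"])
```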


For simplicity, I will be creating a Unity environment with a ground plane, a skybox, and one interactable cube. The ground plane and skybox will be rendered by the cloud server, while the interactable cube will be rendered by the local machine. If the final implementation proves too difficult or time-consuming to complete before the final report deadline, we will simplify further.
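
The client/server split might look roughly like the sketch below: the client streams its camera pose up and gets an encoded background frame back, compositing the cube locally. The endpoint, pose format, and frame encoding are all assumptions at this stage:

```python
# Conceptual client-side sketch: stream the camera pose up, get one
# encoded background frame (skybox + ground plane) back, and leave the
# cube to be drawn locally. Endpoint, pose format, and frame encoding
# are all assumptions at this stage.
import socket
import struct

SERVER = ("ec2-x-x-x-x.us-west-1.compute.amazonaws.com", 6000)  # placeholder

def recv_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def fetch_background(sock, pose):
    """Send a 6-float pose (position + Euler angles), read one frame back."""
    sock.sendall(struct.pack("<6f", *pose))
    (frame_len,) = struct.unpack("<I", recv_exact(sock, 4))  # length prefix
    return recv_exact(sock, frame_len)  # e.g. a JPEG of the background

with socket.create_connection(SERVER) as sock:
    frame = fetch_background(sock, (0.0, 1.6, 0.0, 0.0, 0.0, 0.0))
    # In Unity this would be uploaded to a texture rendered behind the
    # locally drawn cube; compositing stays on the client.
    print(f"received {len(frame)} byte background frame")
```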


Further simplification would involve ONLY sending the skybox + ground plane as a non-interactive render; that is, measuring the server's response time to render a single frame. From this, we can estimate the approximate render speed achievable through the server. Final results will be generated from this data.
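
A sketch of how that single-frame measurement could translate into an estimated frame rate (the real server call is stubbed out with a ~35 ms sleep so the sketch runs standalone):

```python
# The real server call is stubbed out with a ~35 ms sleep so the sketch
# runs standalone; swapping in an actual frame request is the only change.
import statistics
import time

def measure_frame_times(request_frame, samples=20):
    """Time `samples` calls to `request_frame` (any one-frame fetch callable)."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        request_frame()
        times.append(time.perf_counter() - start)
    return times

def approximate_fps(times):
    # One frame per full round trip; pipelining requests would do better,
    # so treat this as a conservative lower bound on frame rate.
    return 1.0 / statistics.mean(times)

times = measure_frame_times(lambda: time.sleep(0.035))
print(f"~{approximate_fps(times):.0f} FPS lower bound")
```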
