I'm highly interested in VR & AR's capacity as a platform to connect people to the digital realm in more meaningful, human ways.

Here are some things I've created in VR.



VR CASE STUDY / 6.29.17

PLATFORM & SDK :: Google Cardboard & Gear VR

TOOLS :: UNITY 5.5, C#, Cinema 4D, 3ds Max, Meshmixer, Visual Studio



I created this game for fun, to teach myself a bit about VR, coding, and what goes into building a game from scratch. It was my FIRST attempt at creating anything in VR.


I started with zero knowledge of C# or Unity. As such, I wanted to approach it from the basics, and start with an SDK that would limit my scope to the fundamentals. I felt it would be distracting if I picked up a VIVE and just went ape with all the possibilities & models of interaction. I wanted to see how much I could accomplish within Cardboard's limits, and how much my iPhone could handle processing-wise (as a benchmark), with a creation straight out of my head... whatever that would be.



I quickly realized that C# is pretty vast, that Unity is pretty powerful, and that I had to learn both. Unity itself came fairly naturally to me given my experience with 3ds Max, Rhino and the like, but I needed to learn to code (properly). I bit the bullet and dove into the most boring free dev and online coding courses conceived by other humans. I had to spend a couple good weeks just learning the core principles of coding. That pain quickly turned to pleasure though, and I spent days creating a lot of... seemingly arbitrary, random experiments.

I thought... wow, I'm really good at creating this random stuff! The question remained though: what the hell am I doing? I was inside a sort of improvisational, randomized learning feedback loop, but I felt it wouldn't translate into something useful without some directed structure. I knew I couldn't go through the Unity C# API at random and experiment with every component class I came across, so I decided to narrow my focus to TRANSFORM, INSTANTIATION, COLLIDERS, RIGIDBODIES and the ANIMATION SYSTEM. More than plenty to chew on.
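To give a flavor of that narrowed scope, here's a minimal sketch of the kind of experiment it enabled. This is not code from the actual game ~ the script and field names are hypothetical, but it touches each focus area (Transform, instantiation, rigidbodies, colliders) in a few lines:

```csharp
using UnityEngine;

// Hypothetical experiment script: spawn a prefab in front of the camera
// and push it away with a Rigidbody impulse.
public class SpawnExperiment : MonoBehaviour
{
    public GameObject nuggetPrefab;   // any prefab with a Collider + Rigidbody
    public float launchForce = 8f;

    void Update()
    {
        // Cardboard's single button maps to a simple tap / Fire1 input.
        if (Input.GetButtonDown("Fire1"))
        {
            // TRANSFORM: spawn just in front of wherever the player is looking.
            Vector3 spawnPos = transform.position + transform.forward * 1.5f;

            // INSTANTIATION: clone the prefab into the scene at runtime.
            GameObject nugget = Instantiate(nuggetPrefab, spawnPos, transform.rotation);

            // RIGIDBODY: let physics carry it away from the player
            // (the Collider on the prefab handles hits from there).
            Rigidbody rb = nugget.GetComponent<Rigidbody>();
            if (rb != null)
                rb.AddForce(transform.forward * launchForce, ForceMode.Impulse);
        }
    }
}
```

Attached to the camera rig, a dozen lines like this is enough to start asking gameplay questions instead of API questions.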




I have an obsession with GIPHY. I see it as a vast catalog of mostly useless and terrible one-hit-wonder records, but I like to mine it for those rare gems. I also have an obsession with ridiculous animals. I decided the two would make a great premise for my game: a shooter where you are rocketed into cyberspace and tasked with feeding the ravenous Cyber Animals of Cyberspace with little nuggets of love and whipped cream (keep reading). After learning to integrate the Cardboard SDK into Unity (thank you, YouTube), I started to build around this narrative with what I had already learned.

Some questions I was asking...

Should the player move?

How do these animals behave? Are there default behaviors?

How does the animals' default behavior play out against the player's?

How should they play out? What does that feel like?

What are the mechanics in VR?

Where is the player's attention?

Is this fun?


After making some decisions about the overall design, and testing how some of the player-ammo-vs-animal gameplay mechanics would play out, I realized I was spending too much time tweaking physics.

When designing a simulation, everything we normally take for granted is abstracted and presented as things to be manipulated. It's easy enough to import and develop based on your sense of real-world time and physics, but once I realized the potential benefits of choosing to ignore all that, I began to understand the power of VR from a design perspective. Taking ownership over more of the physics takes more time (especially in VR), but the exploration felt worth it. The physics of a simulated world, let alone an interactive one, can be manipulated and explored in almost infinite dimensions. There is the potential to go off the deep end or get lost in this process, but the goal for me was to "craft" or tweak the relationships down to something FUN and UNIQUE without getting too wacky. It's like sculpting with numbers.

It became clear that the effects of any departure from real-world physics are amplified tenfold in VR because of how immersive it is. This can either pay off big time or backfire big time. Luckily, it's easy to test and tweak! And wow, you really have to TEST TEST TEST.
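As a sketch of what "sculpting with numbers" looks like in practice ~ the values below are illustrative, not from the shipped game; they just show the kind of knobs Unity exposes once you stop taking real-world physics as a given:

```csharp
using UnityEngine;

// Hypothetical tuning script: override Unity's default "Earth" physics
// so that movement reads as cyberspace rather than reality.
public class CyberspacePhysics : MonoBehaviour
{
    void Start()
    {
        // Global knob: weaker, slightly sideways gravity makes thrown
        // ammo drift instead of dropping like a stone.
        Physics.gravity = new Vector3(0.5f, -2.0f, 0f);

        // Per-object knobs: high drag makes motion feel floaty and
        // deliberate instead of ballistic.
        foreach (Rigidbody rb in FindObjectsOfType<Rigidbody>())
        {
            rb.drag = 2.5f;
            rb.angularDrag = 1.0f;
        }
    }
}
```

Every one of these numbers changes how the whole world *feels* on your head, which is exactly why each tweak demands another round of in-headset testing.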




I felt like a lot of the features I was designing / placing in VR were based on arbitrary decisions, or existed just to test what I call "micro-hypotheses". Example ~ is it disorienting to be shielded by an invisible sphere that also sometimes acts as an alert system? How do I test that situation quickly to see how it feels? What's the shortest, but most experientially accurate, way? Regardless... you make decisions, you fail or succeed fast, and better options present themselves in the process.
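The invisible-sphere micro-hypothesis is cheap to rig up. Here's a hedged sketch of one way to test it (the tag and script names are hypothetical): a trigger collider around the player, so nothing renders or physically blocks, that fires an alert when something crosses the radius.

```csharp
using UnityEngine;

// Hypothetical test rig for the "invisible shield sphere" idea.
// Attach to the player; the SphereCollider defines the shield radius.
[RequireComponent(typeof(SphereCollider))]
public class ShieldAlert : MonoBehaviour
{
    void Awake()
    {
        // Invisible and non-blocking: no renderer, just a trigger volume.
        GetComponent<SphereCollider>().isTrigger = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("CyberAnimal"))
        {
            // Cheapest possible "does this feel right?" feedback ~
            // log it first, then swap in audio or a visual pulse.
            Debug.Log("Alert: " + other.name + " breached the shield radius");
        }
    }
}
```

A few minutes in the headset with this tells you more about disorientation than any amount of theorizing.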

My cyber animals started to take shape. I wanted some insight into character modeling, so I decided to create them from scratch with Meshmixer. The characters needed to follow the narrative somehow, and their movement and behavior needed to match their personality, but the details often didn't match up with my original vision. Instead, new details would emerge, and I improvised. For example... coma cat was supposed to be a hungry, happy fat cat... instead he turned out more like a vampiric, menacing sheep creature. I could have gone back and tweaked him, but I stuck with sheep creature.


My GameObject hierarchy was built more and more off the player. Despite my desire to build systems / objects into separate hierarchies, things always tended to be easier if built off the player itself. This works if you structure things in an abstract enough fashion to allow for changes, and you assign common resources intelligently. For fear of boring you to death with fancy technical breakdowns, here are some pretty diagrams ~

[Diagram: GameObject hierarchy breakdown]





Game dev requires you to be very organized and efficient, especially when it comes to VR. I did a lot of testing on mobile. I loved this game so much that I got a strange high squeezing every last drop out of it performance-wise. It felt like tuning a hot rod.


STATS   ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

SetPass calls ~ 110 avg.

Batches ~ 210 avg.

Tris ~ 22K

FPS ~ 85 - 100



I had A LOT of fun creating this app. It was a great exercise in learning how to conceive, plan for, and build a complete VR app. So much of the actual work is in designing the hierarchical relationships between the front-end 3D world and the back end that runs that world.

I liken this to sculpting the individual pieces of a 3D puzzle while illustrating the complete photo at the same time. 

For anyone starting from scratch in UNITY and C#, I would recommend not taking shortcuts. Take it slow, don't bite off more than you can chew, and take time to zoom out and absorb best practices. It will pay off in the end. Once you begin to see the editor and development environment from a big-picture perspective, you will begin to connect the dots. Things will start to "click" and flow a lot easier, and you will avoid burnout and frustration. If you take time to grasp the principles, the details will become subservient to your vision vs. having your vision held hostage by the details. This is key, since there are A LOT of details, and one can lose their mind behind some code if not careful :)

Thanks for reading!