Category Archives: unity3d

Windows to iOS using Unity Cloud Build

So this sprint my goal was just to get the game running on iOS.  Since I don't have a Mac, I thought this was going to take a ton of time or even be impossible.  I spent a little bit of time googling around and found a lot of despair about it; the only accepted answers were to use a virtual machine.  I was not into that idea.  Thankfully I did not have to tread there, because I found this.

With a few slight modifications I was able to get all of the required files to create a mobile provisioning profile and signing certificate.  I plugged all of that into Unity Cloud Build and got a build of my game onto my phone, all without a Mac.  OK, so that was easy, but I quickly ran into some problems.

First Problem: Unity Cloud Build does not import Maya .mb files

First of all, none of the characters appeared on screen.  I had just been saving .mb files from Maya and using them as the basis for my content.  This works fine on the PC.  It turns out that Unity Cloud Build does not support this.  Apparently Unity quietly uses your local Maya installation (and its FBX exporter) in the background to turn your .mb files into .fbx, and Unity Cloud Build won't do that for you.  Makes sense.  I just did not anticipate it.  So what did this mean?  I had to open and re-export all of my .mb files to .fbx, remake prefabs based on them, fix up materials, and reconnect any references to the new prefabs.  Thankfully I had fewer than 20 .mb files across the two models I have been working with so far, so this wasn't that bad. This could have been a production nightmare if I had found it out at the end.

Second Problem: OnGUI slows even the simplest game to a CRAWL on iOS

All of this took only a couple of days, so I threw a bunch of touch input work into this sprint and got busy.  First I added simple taps, then swipe gestures.  Ultimately the swipe gestures did not work out for what I'm trying to do.  I really liked how the swipes worked in Tiny Rogue, but that game takes place on a single screen and the pace is much slower than what I am going for, so it ended up being frustrating to swipe repeatedly to move quickly over longer distances.

Anyway, it quickly became evident that the frame rate was not great.  As I was googling around about input handling on iOS, I came across threads talking about how terrible performance on iOS is when using OnGUI and UnityEvents.  Shiiiiiiiiiit.  I'm doing that.  The whole architecture I am using is based on polling for events in OnGUI, processing them in Update, and rendering the camera into a texture and blitting it after doing a yield return new WaitForEndOfFrame().  I may need to rethink this.
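To make that concrete, here is a minimal sketch of the kind of loop I'm describing (the class and member names are illustrative, not my actual game code): input gets polled in OnGUI, consumed in Update, and the camera's render texture gets blitted to the screen after WaitForEndOfFrame().

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class PolledInputLoop : MonoBehaviour
{
    public Camera gameCamera;           // assumed to render into targetTexture
    public RenderTexture targetTexture; // low-res render target

    private readonly Queue<KeyCode> pendingKeys = new Queue<KeyCode>();

    void Start()
    {
        gameCamera.targetTexture = targetTexture;
        StartCoroutine(BlitAtEndOfFrame());
    }

    // OnGUI can run several times per frame (a layout pass plus one call per event),
    // which is a big part of why it gets expensive on mobile.
    void OnGUI()
    {
        Event e = Event.current;
        if (e != null && e.type == EventType.KeyDown)
            pendingKeys.Enqueue(e.keyCode);
    }

    void Update()
    {
        // Drain the queued input and turn it into game actions.
        while (pendingKeys.Count > 0)
        {
            KeyCode key = pendingKeys.Dequeue();
            // ...translate the key into a move/attack/etc. here...
        }
    }

    IEnumerator BlitAtEndOfFrame()
    {
        while (true)
        {
            yield return new WaitForEndOfFrame();
            // Copy the render target to the back buffer once the frame is done.
            Graphics.Blit(targetTexture, (RenderTexture)null);
        }
    }
}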

I decided to put that on the shelf; I hear premature optimization is the devil.  The last things I did this sprint were to add a DPad control that only shows up on iOS and to get tap input working to move the character around.

Progress vid:

And here is this sprint's Task Board:

Next sprint I am finally going to tackle tile layers, both dynamically created tiles and preauthored levels.

Sprints!

Regular, meaningful progress is hard to make when you have a full-time job and a family. I figured a public blog would help compel me to do it, but it hasn't worked. So now I am trying to add some structure, and make that public too. I found some open source scrum software called Taiga, and I set up a backlog and a few sprints of work.
The first sprint, which took place over the past two weeks, was a success: I finished all the things I wanted to. It was about completing a basic set of hero movement and death content, plus what I thought would be a small bit of code to tie it together. Since I am using 3D models to represent 2D art, getting the character to face a different direction is actually a model swap rather than a sprite swap; in Unity this means switching to a different skeleton and animation controller. A rough sketch of that idea is below.
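For the curious, here is what that per-direction model swap could look like (a hypothetical component, not my actual code): each facing gets its own child model with its own skeleton and Animator, and only the one matching the current facing stays active.

using UnityEngine;

public class DirectionalModelSwapper : MonoBehaviour
{
    public enum Facing { North, South, East, West }

    [System.Serializable]
    public struct DirectionalModel
    {
        public Facing facing;
        public GameObject model; // child object with its own skeleton + Animator
    }

    public DirectionalModel[] models;

    // Activate the model for the requested facing and hide the others.
    public void SetFacing(Facing facing)
    {
        foreach (var entry in models)
            entry.model.SetActive(entry.facing == facing);
    }
}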
It feels good to queue up a set of tasks for two weeks that form a small piece of a bigger puzzle, and at the end you've learned something about that puzzle. I totally changed what I am going to work on in the next sprint after moving the needle ever so slightly in a different direction and learning just a bit more about what I'm making.

Conversations about cost vs. value add for players

I can’t count the times I have been party to a discussion between programmers and content creators that goes something like this:


Artist: If we do it that way, we’ll have to hard-code the information directly into our content and it will mean we have to create and maintain like four times as many files…

Programmer: If we don’t do it that way, we’ll have to code a whole new system to handle it dynamically and it will cost performance, memory, code maintenance, and bug tail…

Tech Artist: I could write a script that generates the files for us. Everybody wins, right?

Let’s ignore the Tech Artist for now. Both the Artist and the Programmer have valid arguments from their points of view. Each is staring at a potential pile of dirty laundry. If the artist has to maintain four times as many files, then imagine how much beautiful content they aren’t creating instead. If the programmer has to spend time coding and maintaining another system, then imagine how many other feature requests must be rejected…you get the idea.  
The problem with this line of discussion is that it is too focused on how much work each person has to do and how that work cuts into the cool stuff that could be built for players instead. An interesting thing happens when you are both the programmer and the artist for the project. Since it is you coding the system or maintaining the files either way, it makes very little sense to take either position above as a good reason to pick one approach over the other.
This image is what brought all of this to mind. To get some trees to show up, I wrote zero code and just added trees to the random list of tiles I'm populating the world with. It turns out that the trees have transparency around the edges. To get it to look right, I could composite the tree and the grass in Photoshop and be done with it. But what if I want trees on top of dirt? Then I need two tree images instead of one. What if I want them on grass, dirt, and something else? Now I need three. Okay, so what if I composite the tree on top of the grass in code? I started thinking about how much work it would be to support tile layers, and it was a decent amount.

Again, one problem with these lines of thinking is that they are too focused on the cost. I started thinking instead about whether either of these approaches had any bearing on the player experience. If I coded the tile layer solution, that would afford me the ability to remove the trees easily and have the grass remain. If the trees are easily removable, then the player can chop them down, or they can catch fire and burn away. If they are used for cover from projectiles, that could be interesting. Once I began thinking this way, I was much more comfortable weighing the cost. If I know I never want any of those mechanics, and I am really only going to have three types of terrain plus trees, then it is an easy choice to just combine them in Photoshop.
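To make the tile layer idea concrete, here is a rough sketch of what a layered tile could look like (hypothetical names, and sketched with SpriteRenderers for brevity even though my tiles are really little 3D models): the ground and the overlay are separate renderers, so clearing the overlay leaves the ground underneath intact.

using UnityEngine;

public class LayeredTile : MonoBehaviour
{
    public SpriteRenderer groundRenderer;   // grass, dirt, ...
    public SpriteRenderer overlayRenderer;  // tree, rock, or nothing

    public void SetGround(Sprite ground)
    {
        groundRenderer.sprite = ground;
    }

    public void SetOverlay(Sprite overlay)
    {
        overlayRenderer.sprite = overlay;
        overlayRenderer.enabled = overlay != null;
    }

    // Removing the overlay leaves the ground as-is, which is what makes
    // mechanics like chopping or burning trees cheap to support.
    public void ClearOverlay()
    {
        SetOverlay(null);
    }
}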

Ultimately my point in all of this is that when the discussion about taking one approach over another is centered on the costs, and the result would look the same either way, it is important to also consider any potential value adds for the player experience. You might just discover a fun mechanic or maybe even a differentiating gameplay hook.

Animation test, finally!

So in the last post I covered the easy model creation pipeline I was working on. Since then, I solved the final piece of that puzzle, which is to automatically generate the rig. All of that is rather boring so I'll save it for another day. Today I want to show the point of all that, which is to animate! I got two animations set up, east idle and east run. Check it out!

I’ll fill out the north, south and west (maybe mirror east?) and then post some more videos.

Easy model creation

In the skeletal animation experiment I created the model by hand, placing one polygon per pixel using an image plane as a template and then manually setting the UVs to match up with the sprite. That was way too tedious and definitely not scalable. So I wrote a handy little Python function that takes a sprite as input and does all the work for me :-). Check it!

I think I'll still need to create the "rig" manually. There will be one unique rig per direction, with east/west potentially just being a mirror. Once I create one decent-quality animation set and automate the rigging process, I am confident this will be my pipeline.

Skeletal animation experiment

One of the next things I want to figure out is how I’ll handle the character animations. I’ve been using the tiny dungeon sprites from Oryx Design Lab, and I had been thinking that skeletal animation might help bring them to life in an interesting way. I didn’t quite know how I would do this so I did some research and there are a ton of options to try out, from Creature to Spine2D, to Maya, and even Unity itself supports animation from within the editor. Here is my first test:

I brought the 16×16 sprite into Maya and slapped it onto an image plane, scaling it up 100 times so each pixel was 1 grid unit, and then I made a 1×1 polygon plane for each pixel. I ended up using a planar projection with the sprite as the texture, but I think it may be more efficient to either use vertex colors or a tiny 1D palette texture containing the sprite's 8 colors.

This animation is just a simple waving idle, so I want to do another test with more action like a run or an attack. I like where it is going so I want to see if I can take it to the limit as well, with FX and everything. Then I want to see if I can streamline the setup process by automating the manual steps I did to turn the pixels into polygons. Cheers!

Movement and dodging experiment

The game has reached a mini-milestone. All of the parts I need to test movement and dodging projectiles are finished. As in many roguelike games, each move is a turn, and all characters take their turns simultaneously but at different rates. What I wanted to test here is the turn-based movement of the Player combined with the continuous movement of projectiles; the combination requires the Player to move in order to dodge.
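As a rough illustration of that mix (illustrative code, not the project itself): the player snaps between tiles one turn-sized step at a time, while projectiles simply integrate their velocity every frame. In practice each class would live in its own file.

using UnityEngine;

public class TurnBasedMover : MonoBehaviour
{
    public float tileSize = 1f;
    public float moveDuration = 0.15f; // how long one step takes to animate

    private Vector3 targetPosition;
    private bool moving;

    void Start()
    {
        targetPosition = transform.position;
    }

    void Update()
    {
        if (!moving)
        {
            // Only accept a new move once the previous step has finished.
            Vector3 step = ReadStepInput();
            if (step != Vector3.zero)
            {
                targetPosition = transform.position + step * tileSize;
                moving = true;
            }
        }
        else
        {
            transform.position = Vector3.MoveTowards(
                transform.position, targetPosition,
                (tileSize / moveDuration) * Time.deltaTime);
            moving = transform.position != targetPosition;
        }
    }

    private Vector3 ReadStepInput()
    {
        if (Input.GetKeyDown(KeyCode.UpArrow)) return Vector3.up;
        if (Input.GetKeyDown(KeyCode.DownArrow)) return Vector3.down;
        if (Input.GetKeyDown(KeyCode.LeftArrow)) return Vector3.left;
        if (Input.GetKeyDown(KeyCode.RightArrow)) return Vector3.right;
        return Vector3.zero;
    }
}

// Projectiles, by contrast, keep moving continuously every frame,
// so the player has to commit to a step to get out of their way.
public class ContinuousProjectile : MonoBehaviour
{
    public Vector3 velocity = Vector3.right;

    void Update()
    {
        transform.position += velocity * Time.deltaTime;
    }
}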
I'll be throwing builds into my public Dropbox folder if anyone wants to playtest and give feedback. Next up is the damage system, and I also want to try another experiment: skeletal animation.

Yet unnamed Roguelike

While my wife was at GDC I began work on a Roguelike game in Unity.  I have limited time, so progress has been slow but steady.  I’m currently using art from Oryx Design Lab (http://ift.tt/1iD1s45). I’m close to the point where I can playtest the most basic mechanics.  I’ve definitely suffered from the ‘start a lot of projects that don’t get finished’ problem in the past and I am hoping that keeping a devlog will help push me to ship this game by making my progress public.
Honestly, I don't have a super strong 'seed' that everything is growing from.  I am definitely figuring things out as I go.  My current thinking is that the game will have these elements:
  1. Movement is turn based like most roguelikes
  2. Projectile movement is continuous, so you’ll have to move to dodge them
  3. I’ll have a pixelated art style, probably 16×16 for most characters and environment tiles, however the characters will use skeletal animation
  4. It will be the inverse of Rogue Legacy in that you venture outside of a castle into a new environment after each death
  5. There will be weather and survival elements like Unreal World, but less so.  
  6. Your progress is measured in how many days you survive, like in Neo Scavenger
I don't know much more than that right now.  I'm very close to being able to test #2 from the list above.
This blurb and this image are what sparked the whole thing:
Game takes place across a series of 'single screen' battles, a la OG Zelda.
As you move to the edge of the screen, the camera lerps to the next screen
     crazy town idea–it is multiplayer, you get matchmade with players in the next screen
Turn based, roguelike style movement of characters
Projectiles can be dodged in real-time
reverse of Rogue Legacy, where instead of going into a randomly generated castle, you exit your home into the randomly generated forest
     you only get to keep what you bring back?
     you only get to keep ‘special’ items a la enchanted cave 2
Here is my current todo list:
Stay tuned for more updates!

Unity3D Extension method example

Extension methods are awesome for extending the functionality of existing classes. What is really cool is that if you use an IDE that supports autocompletion, your extension methods will show up in the completion list for the classes you extend.

For this example, let’s suppose that you want to extend Unity’s AnimationClip to support the idea of adding an “OnAnimationEnd” event. When the animation clip ends, a callback will be called.

using UnityEngine;

public static class AnimationClipExtensions
{
    public static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName)
    {
        AddOnAnimationEndEvent (animationClip, onAnimationEndCallbackName, 0, 0.0f, "", null);
    }

    public static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName, int intParameter)
    {
        AddOnAnimationEndEvent (animationClip, onAnimationEndCallbackName, intParameter, 0.0f, "", null);
    }

    public static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName, float floatParameter)
    {
        AddOnAnimationEndEvent (animationClip, onAnimationEndCallbackName, 0, floatParameter, "", null);
    }

    public static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName, string stringParameter)
    {
        AddOnAnimationEndEvent (animationClip, onAnimationEndCallbackName, 0, 0.0f, stringParameter, null);
    }

    public static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName, Object objectReferenceParameter)
    {
        AddOnAnimationEndEvent (animationClip, onAnimationEndCallbackName, 0, 0.0f, "", objectReferenceParameter);
    }

    // Shared implementation: schedules an AnimationEvent at the very end of the
    // clip so the named callback is invoked (SendMessage-style, no receiver required)
    // when playback reaches the end.
    private static void AddOnAnimationEndEvent (this AnimationClip animationClip, string onAnimationEndCallbackName, int intParameter, float floatParameter, string stringParameter, Object objectReferenceParameter)
    {
        AnimationEvent animEvent = new AnimationEvent ();
        animEvent.time = animationClip.length;
        animEvent.functionName = onAnimationEndCallbackName;
        animEvent.intParameter = intParameter;
        animEvent.floatParameter = floatParameter;
        animEvent.stringParameter = stringParameter;
        animEvent.objectReferenceParameter = objectReferenceParameter;
        animEvent.messageOptions = SendMessageOptions.DontRequireReceiver;
        animationClip.AddEvent (animEvent);
    }
}

Now that we have a few extension methods, we can test them out in a component.

using UnityEngine;

public class AnimClipExtensionsTest : MonoBehaviour
{
    public AnimationClip testClip;

    void Start()
    {
        Debug.Assert(testClip != null);
        testClip.AddOnAnimationEndEvent("OnAnimationEnd");
        
        var animationComponent = GetComponent<Animation>();

        Debug.Assert(animationComponent != null);

        // Note: the legacy Animation component only plays clips marked as Legacy
        // in their import settings, so testClip is assumed to be a legacy clip.
        animationComponent.AddClip(testClip, "testClip");
        animationComponent.Play("testClip");
    }

    void OnAnimationEnd()
    {
        Debug.Log("OnAnimationEnd() was called!");
    }
}