Sunday 31 January 2016

Animation Events and Animator Override Controller

Animation events are really useful when we want something to happen at a particular point during the playback of an animation.

In this scene, for example, our character will perform a "fireball shooting" animation. To make it look good, we need the fireball to spawn at a specific moment of the animation.
For this little project I used the same character model we used last time found here, the KY Magic Effect Free found here, and the Magic Pack animations found here.

I also reused the animator controller we created in the previous post. I will now show you how we can reuse the same controller but with different animations or with new animations added.

Let's drag in the model, not forgetting to change its rig to Humanoid, and rename it Player.

Now we create the animation controller. Since our character will move in exactly the same way as the character of our last post, we are going to reuse the same controller, but we want to add one animation to it, so we are going to use an Animator Override Controller. To create one, right click in the Project folder, choose Create -> Animator Override Controller and call it PlayerOverrideController. Do not forget to check the Apply Root Motion box on the Player's Animator component.

If we select our new controller, we can see an empty field that expects a Controller. Here we place the animator controller we created in the previous post, which in my case is called Player. All the animations we set up last time, the parameters and the Motion blend tree are now available for our new controller (Fig 1).

Fig 1

This allows us to quickly reuse the same state machine without having to recreate a whole new one to achieve the same thing. It also lets us modify the setup without touching the original controller.
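Everything in this post is set up in the editor, but the same kind of override can also be built from code. Below is a minimal sketch of that idea; the field names and the clip key are placeholders of mine, not assets from this project.

using UnityEngine;

public class BuildOverrideController : MonoBehaviour {

    public RuntimeAnimatorController baseController; // the controller from the previous post
    public AnimationClip newShootClip;               // the clip we want to swap in

    void Start () {
        // Create the override controller and point it at the original controller
        AnimatorOverrideController overrideController = new AnimatorOverrideController ();
        overrideController.runtimeAnimatorController = baseController;

        // Replace one of the base controller's clips, keyed by the original clip's name
        // ("OriginalClipName" is a placeholder, not a clip from this project)
        overrideController ["OriginalClipName"] = newShootClip;

        // Use the override controller on this object's Animator
        GetComponent<Animator> ().runtimeAnimatorController = overrideController;
    }
}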

We can now drag a new animation into our controller. It is in the Mixamo folder, under MagicPack -> Animations, and is called Standing_1H_Magic_Attack_03. Simply drag and drop it into the Animator window and a new state will be created; rename it "Shoot". While we are in the Animator window, let's also create a new parameter of type Trigger and call it Shoot.

To create a transition, right click on the Motion state and select Make Transition. To point the arrow at the shooting state, simply click on it. Let's do the same thing in reverse, from the shooting state back to the Motion state (Fig 2).

Fig 2
We want to transition to the shooting animation whenever the Shoot parameter is triggered, and we want to go back to Motion when the shooting animation is finished. To set this up, first click on the transition arrow from Motion to Shoot. Uncheck the Has Exit Time parameter and, in the Conditions list below, click the + sign to add our new trigger parameter Shoot (Fig 3).

Fig 3
Now let's click on the other arrow. Here we'll leave the Has Exit Time box checked, and the default settings can stay as they are for now. However, I tweaked the blending points a little to get what is, in my opinion, a better transition. I did that by dragging the two little blue arrows you can see in the graph below (Fig 4).

Fig 4

Before we set up the animation event, we need to create the script that will handle the shooting and a script for the actual "magic ball" to move and eventually destroy itself. Create a new C# script called Shooting and another called MagicBall.

Let's also set up the MagicBall object: in the KY_Effects folder, go to MagicEffectsPartFree -> prefab and choose ErekiBall2 (you can of course pick any one you like). Add a Rigidbody component and uncheck the Use Gravity box. Then add a Sphere Collider, adjust it if you like and check Is Trigger. We don't really need colliders and a rigidbody for this example, but it is good practice to give these components to moving objects. Finally, add the MagicBall script and that's it. We no longer need this object in the scene, so we can drag it into the Project folder to create a prefab and then delete it from the scene.

Here is the MagicBall C# script:

using UnityEngine;
using System.Collections;

public class MagicBall : MonoBehaviour {

    Vector3 direction;
    float speed = 10;

    Rigidbody rb;

    // Use this for initialization
    void Start () {
        rb = GetComponent<Rigidbody> ();
        StartCoroutine (die ());
    }

    // Destroys the ball 5 seconds after it is spawned
    IEnumerator die()
    {
        yield return new WaitForSeconds (5);
        Destroy (this.gameObject);
    }

    void FixedUpdate()
    {
        // Move the rigidbody along the direction set by the Shooting script
        rb.MovePosition (rb.transform.position + direction * speed * Time.deltaTime);
    }

    public void setDirection(Vector3 dir)
    {
        direction = dir;
    }
}

The direction variable is set later by the Shooting script via the setDirection(...) method, and in FixedUpdate we simply move the object's rigidbody accordingly. We also start the die() coroutine as soon as the object is created, so our magic ball will destroy itself after 5 seconds. The magic ball is now complete.

Let's create a new empty game object called ShootingPoint; it will handle the spawning of our ball. Place it as a child of the Player object and set its position to X: 0.744, Y: 0.62, Z: 0.511 (Fig 5). This ensures the ball spawns right where the hand of the character model is.

Fig 5
Now let's add the Shooting C# script to the ShootingPoint object. Below is the script itself:

using UnityEngine;

public class Shooting : MonoBehaviour {

    public GameObject prefab;   // the MagicBall prefab, assigned in the inspector

    public void ShootBall(Vector3 v)
    {
        // Spawn the ball at the ShootingPoint position and hand it the direction to travel in
        GameObject ball = Instantiate (prefab, transform.position, Quaternion.identity) as GameObject;
        ball.GetComponent<MagicBall> ().setDirection (v);
    }
}

The public GameObject variable is going to hold the MagicBall prefab we created earlier, so DO NOT forget to assign the prefab to it in the inspector (Fig 6).

Fig 6

Then, when the ShootBall method is called, we instantiate a magic ball in the scene, grab its MagicBall script and call setDirection, passing it the direction vector.

Finally, let's create one last C# script called PlayerMovement, which is identical to the script from the previous post except for one additional method called Shoot.




using UnityEngine;

public class PlayerMovement : MonoBehaviour {

    public float dampSpeed;
    public float dampRotation;

    Animator anim;
    Vector3 originalPos;
    Shooting shooting;

    // Use this for initialization
    void Start () {
        anim = GetComponent<Animator> ();
        shooting = GetComponentInChildren<Shooting> ();   // the Shooting script lives on the ShootingPoint child
        originalPos = transform.position;
    }

    // Update is called once per frame
    void Update () {

        float speed = Input.GetAxis ("Vertical") / 2;
        float rot = Input.GetAxis ("Horizontal");

        if (Input.GetKey (KeyCode.Space))
            speed *= 2;

        if (Input.GetKey (KeyCode.R))
            transform.position = originalPos;

        // A mouse click fires the Shoot trigger, which plays the shooting animation
        if (Input.GetMouseButtonDown (0))
            anim.SetTrigger ("Shoot");

        anim.SetFloat ("Speed", speed, dampSpeed, Time.deltaTime);
        anim.SetFloat ("Rotation", rot, dampRotation, Time.deltaTime);
    }

    // Called by the animation event during the shooting animation
    public void Shoot()
    {
        shooting.ShootBall (this.transform.forward);
    }
}

We get a reference to the Shooting script from the child object ShootingPoint, and the Shoot() method simply calls the ShootBall method of that script, passing it the transform.forward of the player model so the magic ball moves in the direction our character is facing. We also detect mouse clicks in Update and, when one occurs, fire the Shoot trigger in the animator, starting the shooting animation.

At this point, we can go on and create the animation event that will call the aforementioned Shoot() method. Select the Player object and go to Window -> Animation. Once in this window, open the animation list on the top left and select the shooting animation we added earlier (Standing_1H_Magic_Attack_03). By keeping an eye on our character in the editor window, drag the red line forward to around 0.22, as that is the exact moment of the animation at which we want the magic ball to spawn (Fig 7).

Fig 7
To add an animation event, click on the "stick like" icon at the top left. The event icon will appear exactly where the red line is placed on the timeline. By double clicking on it, we can select the function to be called (Fig 8).

Fig 8
All done! Now everything is set up and our character should move around and shoot magic balls at every click.

One thing to remember: animation events can only call functions that take no parameter or a single simple parameter (float, int, string or an object reference), and the function must be in a script attached to the same game object that is playing the animation.
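For completeness, the same kind of event can also be added from a script. The sketch below is only an illustration under my own assumptions (the clip name check and the component it sits on are mine, not from the post), and events added this way exist only at runtime:

using UnityEngine;

public class AddShootEventAtRuntime : MonoBehaviour {

    void Start () {
        Animator anim = GetComponent<Animator> ();

        foreach (AnimationClip clip in anim.runtimeAnimatorController.animationClips) {

            // The name check is an assumption about the shooting clip used in this post
            if (clip.name.Contains ("Magic_Attack")) {

                AnimationEvent evt = new AnimationEvent ();
                evt.functionName = "Shoot"; // must exist on a script attached to this object
                evt.time = 0.22f;           // seconds into the clip, as in Fig 7

                clip.AddEvent (evt);        // runtime only: the event is not saved into the asset
            }
        }
    }
}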

Example scene of the final result here.

Friday 29 January 2016

Blend Trees for smooth animations

Blend trees are a great tool that we can use in Unity if we want our characters to have smooth animations.

For this project I will use some of the Standard Assets, and this model.

Let's start by going into the Mobility_01_Free_v2 folder, then FBX, and locating the character model (MotusMan_v2). Before dragging it into the scene we need to change its rig: select the Rig tab, change the Animation Type to Humanoid and click Apply (Fig 1).


Fig 1

Now we create an Animator Controller and assign it to our player. Let's also enable Apply Root Motion on the Animator component.

Open the Animator window and create a new blend tree by right clicking and selecting Create State -> From New Blend Tree (Fig 2).

Fig 2

Now create two float parameters called Speed and Rotation, change the Blend Type to 2D Freeform Cartesian and assign the newly created parameters in the fields below (Fig 3).

Fig 3

Now we can start adding the motion fields in the Motion list below. I will use the animations from the Standard Assets package, in the folder StandardAssets -> Characters -> ThirdPersonCharacter -> Animation. Remember to drag in only the animation clip, not the whole prefab. We will need a total of 15 motion fields. Fig 4 shows which animations I picked and how I set them up.

Fig 4

Now, I could let the engine compute the values needed for the blending based on the root motion, but I used another trick instead. Since we are going to feed the blend tree variables (Rotation and Speed) with the Input.GetAxis() method, we will only get values between -1 and 1. So this is what I did: for each motion field, the Pos X parameter represents our rotation. I set it to 1 for every animation that involves a sharp rotation to the right and 0.5 for a less sharp rotation to the right; the same goes for left rotations, but with negative values. For animations with no rotation, I set it to 0.

The other parameter, Pos Y, is our speed: I set it to 0 for every animation that has no forward movement, 0.5 for walking animations and 1 for running animations.

Depending on the values we feed into the blend tree, the engine will blend these animations together accordingly.

Now we can create the script that will pass values to the animator controller. It is actually very simple.

using UnityEngine;
using UnityEngine.UI;

public class PlayerAnimation : MonoBehaviour {

    public float dampSpeed;
    public float dampRotation;

    public Slider speedSlider;
    public Slider rotatioSlider;

    Animator anim;

    // Use this for initialization
    void Start () {
        anim = GetComponent<Animator> ();
    }

    // Update is called once per frame
    void Update () {

        // Halve the vertical axis so walking tops out at 0.5...
        float speed = Input.GetAxis ("Vertical") / 2;
        float rot = Input.GetAxis ("Horizontal");

        // ...and double it back to 1 while Space is held, to trigger the run animations
        if (Input.GetKey (KeyCode.Space))
            speed *= 2;

        // Damping values come from two UI sliders so they can be tweaked while playing
        dampSpeed = speedSlider.value;
        dampRotation = rotatioSlider.value;

        anim.SetFloat ("Speed", speed, dampSpeed, Time.deltaTime);
        anim.SetFloat ("Rotation", rot, dampRotation, Time.deltaTime);
    }
}

In this script we simply get the Animator component and pass it the speed and rotation values read from the keyboard. Notice how I halve the value obtained from the vertical axis, so we can only get a maximum of 0.5 (Input.GetAxis() returns values between -1 and 1). By doing so we assign 0.5 to the Speed variable of the animator, triggering the walking animation. Then, if we press Space, I double the speed value, bringing it to 1 and therefore triggering the run animation. A pretty cheap trick, but it gets the job done.

Additionally, we apply damping to our animations. Damping smooths out the values passed to the animator over time. You can experiment with it in the example scene: the damping values come from two sliders that you can adjust while playing, which gives a good feel for what damping does.
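To make the difference concrete, here is the same call with and without damping; this fragment reuses the anim, speed and dampSpeed variables from the script above:

// Undamped: the parameter jumps straight to the new value every frame
anim.SetFloat ("Speed", speed);

// Damped: the parameter eases towards the new value over roughly dampSpeed seconds
anim.SetFloat ("Speed", speed, dampSpeed, Time.deltaTime);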

At this point our character is ready and will move around our empty scene freely and in a very "smooth" way.

Example scene here.

Thursday 28 January 2016

Navigation and click-to-move

Navigation is one of the awesome features of the engine. It provides a sort of basic AI for your objects, allowing them to navigate the scene autonomously while avoiding obstacles.

Let's see how we can achieve that by moving a character on a plane using the mouse, making sure he doesn't try to run through walls.

For this simple project I used some of the assets found in the Standard Assets package by Unity Technologies, downloadable for free on the Asset Store.

Let's start by placing our floor in the scene by dragging in the prefab FloorPrototype64x01x64. For simplicity, we will rename it Floor. Let's set its layer to a new layer called Ground. In order to use the navigation system, we need to tell Unity that this object is Static, more specifically Navigation Static. We can achieve this by simply clicking on the Static label at the top right (Fig 1).

Fig 1


We can now place some obstacles in the scene; we can either drag in some prefabs from the Standard Assets or simply create cubes. Let's also set them to Static. Finally, let's drag in the character model from the folder Characters -> ThirdPersonCharacters -> Prefabs -> ThirdPersonController.

My scene looks like this:



Now, it is time to bake it. Yes, bake it.

In order to get the navigation system to work we have to bake the scene. To open the navigation window we go to Window -> Navigation. (Fig 2)


Fig 2

Once the navigation window opens, we select the bake tab. (Fig 3)

Fig 3

Now, I could go on and explain all the parameters in the window, but I really believe that the best way to understand what they do is simply to play around with them. For the moment we leave them as they are and proceed by pressing the Bake button at the bottom right of the window (Fig 4).


At this point magic will happen.

In the editor, the scene will look like this:


The blue area represents the "walkable" part of the scene, while the greyed out parts are considered obstacles. During the baking process, Unity checks all the objects in the scene marked Static and turns them into obstacles to be avoided by any NavMeshAgent navigating the scene.

We can now remove the two scripts attached to our player character (ThirdPersonUserControl and ThirdPersonCharacter), as we are going to write a new script to move the character via mouse clicks. We also need to add a NavMeshAgent component to our model; this is the component that will drive our player through the scene.

Let's create a new C# script and call it PlayerMovement. Here is the code:



using UnityEngine;

public class PlayerMovement : MonoBehaviour {

    Animator anim;
    NavMeshAgent nma;

    Ray ray;
    RaycastHit hit;
    float speed = 0.5f;

    // Use this for initialization
    void Start () {
        anim = GetComponent<Animator> ();
        nma = GetComponent<NavMeshAgent> ();
        nma.speed = speed;
    }

    // Update is called once per frame
    void Update () {

        // Ray from the camera through the mouse position into the world
        ray = Camera.main.ScreenPointToRay (Input.mousePosition);
        RayCastFromCamera ();

        // Project the agent's desired velocity onto the character's forward direction
        // and use its magnitude to drive the animator
        Vector3 proj = Vector3.Project (nma.desiredVelocity, transform.forward);
        anim.SetFloat ("Forward", proj.magnitude);
    }

    void RayCastFromCamera()
    {
        // Only consider hits on the Ground layer
        if (Physics.Raycast (ray, out hit, 500, LayerMask.GetMask ("Ground"))) {

            if (Input.GetMouseButtonDown (0)) {
                nma.SetDestination (hit.point);
                nma.Resume ();
            }
        }
    }
}

Let's break it down.

We get all our component references (Animator and NavMeshAgent).

We then declare a Ray variable and a RaycastHit variable. If you are not sure how to use raycasting, check the official Unity documentation; I might make a tutorial about it if requested.

In the Start method we initialize all our variables.

In the Update method, we perform a raycast to check where in the world we are pointing with the mouse, using a ray created from the camera through the mouse position.

If we hit anything on the Ground layer and the left mouse button is clicked, we pass the collision point to the NavMeshAgent as its destination.

Now, for this particular example the animation might behave strangely; that is because, to keep the script simple, I'm not setting the rotation animation. If you are interested, let me know and I will make a tutorial on it.

The character movement is controlled by the animator, and its speed is determined by the animation speed (we are using root motion). Therefore, the higher the value we feed into the Forward variable of the animator, the faster the character will run. To get smooth movement, we use vector projection.

In simple words, by taking the magnitude of the projection of the NavMeshAgent's desired velocity onto the character's forward vector, we get a speed value that goes from 0 (when the NavMeshAgent destination is very close to the agent itself) up to its maximum possible value (the NavMeshAgent speed variable, in our case 0.5). I set the speed to 0.5 so we always get a walking animation, as the running animation can create complications unless we adjust other parameters (like the rotation animation). That is simply because the animator is using blend trees and needs extra parameters to allow for smooth animations. I will eventually make a tutorial on how to achieve that with this technique.
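As a quick worked example (made-up numbers, not values from the project):

// Hypothetical numbers, just to show the idea:
Vector3 desiredVelocity = new Vector3 (0.3f, 0f, 0.4f); // where the agent wants to go
Vector3 forward = new Vector3 (0f, 0f, 1f);             // the character is facing +Z

Vector3 proj = Vector3.Project (desiredVelocity, forward); // (0, 0, 0.4)
float forwardValue = proj.magnitude;                       // 0.4 -> fed to the "Forward" parameter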

We calculate the projection using the NavMeshAgent's desired velocity, as that is the velocity the agent wants to move with (therefore, towards the destination) and it accounts for obstacle avoidance. If we had used NavMeshAgent.destination we would see the character attempting to walk straight towards the target, ignoring obstacles on the path.

By the way, if you would like to know more about vector projection and NavMeshAgent movement, I strongly suggest you check the "Stealth" complete project tutorial on the official Unity website.

However, for our purpose this is enough. We now have a character that moves towards the clicked point while avoiding obstacles.

A few things to remember:

- Every time we place or remove obstacles in the scene, we need to rebake the scene.
- It is important to mark as Static all the objects that we want to be avoided by our character in the scene
- It is possible to create "bridges" over areas that are not navigable using Off Mesh Links (a more in-depth topic for another tutorial).
- Notice how, when you play the scene, if we click outside the "level" the character walks to the closest reachable point and then stops. One way to handle this is to check the NavMeshAgent's path (for example its hasPath or pathStatus properties): if no complete path exists, we can set the speed to 0 or stop the agent so the character does not move. See the sketch below.
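Here is a rough sketch of that last idea, written as an extra method for the PlayerMovement script above. It checks pathStatus rather than hasPath alone, which is an assumption of mine about how to detect an unreachable click:

// A possible addition to the PlayerMovement script above (a sketch, not from the post):
// if the computed path is not complete, stop the agent instead of walking to the closest point.
void LateUpdate ()
{
    if (!nma.pathPending && nma.hasPath && nma.pathStatus != NavMeshPathStatus.PathComplete) {
        nma.Stop ();                      // Unity 5.x API, matching the Resume () call used above
        anim.SetFloat ("Forward", 0f);    // also stop the walking animation
    }
}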

Well, that's about it.

An example scene here. I took advantage of the stencil buffer shaders I wrote in the previous posts, so we can see the character behind the walls. To make it stand out, the player model is rendered in a single color, in this case green.

Wednesday 27 January 2016

Moving platform in 2D


In 2D action games we often encounter the (in)famous moving platforms, which are sometimes harder to jump onto than the final boss is to defeat.

I would like to show you a method I use for creating moving platforms that makes it easy to lay out the platform's travel path.

First of all, we create an empty game object called MovingPlatform. We then drag in whatever sprite we have for the platform and we child it to the MovingPlatform object we just created. Finally, we create another empty object, called Points, which will also be a child of MovingPlatform.

Your hierarchy should look like this:


The Points object is simply an empty object that acts as a parent object for all the child objects we will use as navigation points for the platform.

We can now create a few empty objects (which we can call P1, P2, P3, ...) and place them as children of Points. In order to see these empty objects in the scene, we give them an icon (Fig 1).

Fig 1
Our Platform object should now have only a Sprite Renderer component. Let's add a Rigidbody2D and set it to Is Kinematic.


Now we create two C# scripts: one called MovingPlatform and the other PointsHolder.

Let's start with the simpler one: PointsHolder. The sole purpose of this script is to count all the children of the Points object and return an array of Transforms containing those children.

Here is the code:

using UnityEngine;

public class PointsHolder : MonoBehaviour {

    Transform[] allPoints;

    // Use this for initialization
    void Awake () {
        allPoints = new Transform[transform.childCount];
    }

    public Transform[] getAllPoints()
    {
        // Fill the array with every child of the Points object
        for (int i = 0; i < allPoints.Length; i++) {
            allPoints[i] = transform.GetChild(i);
        }

        return allPoints;
    }
}

Now, the other script, MovingPlatform:
using UnityEngine;
using System.Collections;

public class MovingPlatform : MonoBehaviour {

    public enum Modes
    {
        PING_PONG,
        LOOP
    }

    public Modes mode;

    public bool isMoving;
    public float speed;

    PointsHolder pointsHolder;
    Transform[] allPoints;
    Rigidbody2D rb2d;

    int direction = 1;

    void Start()
    {
        pointsHolder = transform.parent.Find ("Points").GetComponent<PointsHolder> ();
        rb2d = GetComponent<Rigidbody2D> ();
        allPoints = pointsHolder.getAllPoints ();

        StartCoroutine (moveAround ());
    }

    IEnumerator moveAround()
    {
        // Teleport the platform to the first point
        transform.position = allPoints [0].position;

        int index = 1;

        while (isMoving) {

            // Move towards the current target point until we reach it
            while (rb2d.transform.position != allPoints[index].position)
            {
                rb2d.transform.position = Vector3.MoveTowards (rb2d.transform.position, allPoints[index].position, speed * Time.deltaTime);
                yield return null;
            }

            // In ping-pong mode, reverse direction at either end of the point list
            if (mode == Modes.PING_PONG)
                if (index == allPoints.Length - 1 || index == 0)
                    direction *= -1;

            // In loop mode, wrap back to the first point after the last one
            if (mode == Modes.LOOP)
                if (index == allPoints.Length - 1)
                    index = -1;

            index += direction;
            yield return null;
        }
    }
}

Let's break this one down.

First, I declare an enumeration which determines the moving mode of the platform: PING_PONG means the platform travels backwards once it has reached the last point, while in LOOP mode it goes back to the first point and iterates over all the points again.

After declaring the other needed variables, in Start we get the array of Transforms from the PointsHolder script we wrote earlier. At this point, the coroutine can start.

This coroutine is our moving function. It starts by teleporting the platform to the first point, and then iterates over the array of Transforms coming from the sibling object Points and its PointsHolder script, constantly moving the platform towards the next available Transform. The array index is then updated according to the moving mode dictated by the Modes enumeration.

At this point we can create empty objects and place them as children of the Points object, and at the start of the scene the Platform object will begin its movement, following the points one by one.

Here's a short video of the final result:




I will perhaps discuss how to get the character to jump on it and react properly (no falling through or sliding off) sometime later on, in another post. For now, let's enjoy our moving platform.

Parallaxing

Parallaxing.

According to the Wikipedia definition, parallax scrolling is a technique in computer graphics and web design where background images move past the camera more slowly than foreground images, creating an illusion of depth in a 2D scene and adding to the immersion.

Well, it's pretty self-explanatory.

There are different ways in which we can achieve parallaxing in Unity. I'm going to show you my favourite approach, which involves using two cameras.

First things first: we need a second camera. To make it easy, we can duplicate the camera already in the scene. DO NOT FORGET to remove the Audio Listener component from one of the cameras or Unity will start screaming at you. One camera is set to Orthographic, the other to Perspective. Then we create an empty game object and child both cameras to it. Both cameras must be set to position 0,0,0 so they are perfectly aligned with the parent object (Fig 1 - Fig 2).

Fig 1
Fig 2

The orthographic camera needs some additional settings. First we change its Clear Flags to Depth Only (Fig 4). We also need to create a new layer, which we will call Background (Fig 3), and modify the camera's Culling Mask by removing the Background layer (Fig 4).
Fig 3

Fig 4
The perspective camera also needs a couple of settings: its Culling Mask must be set to Background only (Fig 5), and its Depth value must be lower than that of the orthographic camera (Fig 5). The Depth value works like the sorting layer of a sprite renderer: the camera with the lower value renders first. In other words, our perspective camera will always draw behind the orthographic camera.

Fig 5
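If it helps to see the same settings expressed in code, here is a rough equivalent of the two cameras' configuration (a sketch only; the post does all of this in the inspector, and the class and field names are mine):

using UnityEngine;

public class ParallaxCameraSetup : MonoBehaviour
{
    public Camera orthoCamera;       // renders everything except the Background layer
    public Camera perspectiveCamera; // renders only the Background layer, behind the other camera

    void Start()
    {
        orthoCamera.orthographic = true;
        orthoCamera.clearFlags = CameraClearFlags.Depth;                // "Depth only" in the inspector
        orthoCamera.cullingMask = ~LayerMask.GetMask("Background");     // everything but Background

        perspectiveCamera.orthographic = false;
        perspectiveCamera.cullingMask = LayerMask.GetMask("Background"); // Background only
        perspectiveCamera.depth = orthoCamera.depth - 1;                 // lower depth renders first
    }
}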
I use 2 sprites for the background, 2 different "types of mountain". And, please have mercy, I actually drew these ones.



By placing each one of them next to a copy of itself we can create a sort of chain.


We place the two chains under two different empty objects, so we can adjust their Z value simply by moving the parent objects (Fig 6, Fig 7). A very important thing to remember is to change the layer of the mountain sprites to Background, so they are rendered only by the perspective camera. The sorting layer matters too: even though we are working with a perspective camera, sorting layers still apply to sprites, so the mountain chain we want to keep in front (in our case the brown one) needs all its sprites on a higher sorting layer than the other chain. This makes sure the front mountain chain is always drawn on top of the other one (Fig 6, Fig 7).

Fig 6

Fig 7
At this point we play around with the Z values and the scaling of the two parent objects of the mountain chains (BackChain and FrontChain in Fig 6 and Fig 7) until we are satisfied with the result. It is important to keep watching the Game window while doing this, as that is where we'll see the perspective camera creating the depth effect. In the end we should have something like this:


Well, that is pretty much it. If we want to add more scrolling background objects, all we have to do is assign them to the Background layer, adjust the sorting layer of their sprite renderer and place them in the scene. All the background objects will be rendered by the perspective camera, which takes the Z value of the transform into account, giving a sense of depth. The rest of the level, like interactable objects and units (chests, enemies), can be placed on other layers, rendered by the orthographic camera and kept on the same "plane" as the player.

Example scene here.


Character sprite from: https://www.assetstore.unity3d.com/en/#!/content/42731


Stencil Buffer....for 2D

This is a sort of follow up of the previous post.

Here we achieve the same effect, this time using sprites in a 2D environment.

When the character is behind an object, it is drawn in a single color, giving the impression that it is placed behind that object.

Here is a picture of the effect:



The approach is the same: one shader for the tree, which works as a mask, and one for the character.

The shader mask:

//  previous code, properties etc...

  Tags
        { 
            "Queue"="Transparent-10" 
            "IgnoreProjector"="True" 
            "RenderType"="Transparent" 
            "PreviewType"="Plane"
       
        }

        Cull Off    
        ZWrite Off
      
        Blend One OneMinusSrcAlpha

        Pass
        {

        Stencil
        {
        Ref 4
        Comp Always
        Pass replace
        }

// rest of the code for normal rendering...

The idea is the same as in the previous post: every pixel drawn by this pass writes the value 4 into the stencil buffer.

Then we have the shader for the character. Just like the shader for the 3D model in the previous post, it consists of two passes: the first one returns a single color (_MaskedColor) wherever the sprite is drawn on pixels whose stencil buffer value is 4 (Ref 4), in other words wherever it overlaps the tree.


//  previous code, properties etc...

  Tags
        { 
            "Queue"="Transparent" 
            "IgnoreProjector"="True" 
            "RenderType"="Transparent" 
            "PreviewType"="Plane"
  
        }

        Cull Off     
        ZWrite Off
    
        Blend One OneMinusSrcAlpha

        Pass
        {

        ZWrite on
        Stencil
        {
        Ref 4
        Comp Equal
     
        }

// rest of the code, vertex function etc...

 float4 frag(v2f v) : COLOR
            {
              return _MaskedColor;

            }

//rest of the code....

The second pass draws the sprite normally, but only where the stencil buffer is NOT 4 (Comp notequal).

//  previous code, properties etc...

   Pass
        {

        Stencil
        {
        Ref 4
   Comp notequal
        }

// rest of the code, vertex function etc...

 fixed4 frag(v2f  v) : SV_Target
            {
                fixed4 c = tex2D(_MainTex, v.texcoord) * v.color;
                c.rgb *= c.a;
                return c;

            }

//rest of the code....

The fragment function returns the texture and whatever color we assign to the variable _Color, just like your standard sprite shader.

A few important things to remember:

- In this particular case the character is made of several sprites and is not animated with a sprite sheet. That means every sprite renderer of the objects that compose the character needs a material with the shader we wrote.

- This method works as long as the sorting layer of the masking objects (in this case, the trees) has a lower value than the character's sorting layer. I'm currently looking for a method that makes this effect work regardless of the sorting layers of the sprites involved. If you happen to know a good solution, please drop a line down below :).

Example scene here.



Character sprite from: https://www.assetstore.unity3d.com/en/#!/content/42731

Monday 25 January 2016

Stencil Buffer Shader for special effects

The goal is to be able to see objects behind other specific objects. In this little project in particular, I wanted to be able to see my character when he was behind a wall, so I could still move him in the direction I wanted.

This is an image of the effect:


In order to achieve this, we need to use two shaders: one for the walls, one for the character.

The purpose of the shader applied to the walls is to allow the special effect to work only on certain objects (walls), just in case I wanted some walls to completely obstruct the view.

Both shaders are fairly simple: they react to light (multiple lights as well) but they don't cast shadows.

The walls shader simply uses a stencil buffer, like so:

Pass
        {

        Stencil
        {
        Ref 4
        Comp always
        Pass replace
        }


// rest of the code for rendering

}

This tells the GPU that all the pixels used to draw an object with this shader are given the value 4 in the stencil buffer.

Then we have the character shader. It has two passes: one renders the character normally, and the other renders a single color, the white we see when the character is behind a wall.

The first pass goes like this:
    Pass
        {
        Tags{"Queue" = "Geometry" "RenderType"="Opaque"}

        ZWrite off
        ZTest Always
        Blend One OneMinusSrcAlpha

       
 Stencil
        {
        Ref 4
        Comp equal
        }




fixed4 frag (v2f i) : SV_Target
            {
                
// sample the texture

                
// apply fog

                return _MaskedColor;
     
       }


}

This pass draws a white color (or whatever color _MaskedColor is set to) for each pixel of the character model covered by a wall (stencil value 4), regardless of depth (ZTest Always). Notice how the fragment function simply returns a color (_MaskedColor), in our case white. The color can be changed in the inspector of the material.

The second pass:

        Pass
        {
        Tags{"LightMode" = "ForwardBase"}
        ZWrite on
        ZTest LEqual
       
// rest of the code for texture and lighting rendering
        }


Basically, the GPU renders the texture and lighting whenever the character is in front of other objects (ZTest LEqual), overriding the white color of the previous pass.

It is very important to respect the order of the passes.

You can try the effect in the scene here.













Random 2D room generator

This is a small project I developed in Unity, the popular game engine.

It's a C# script I wrote that generates random rooms in a 2D environment.

This is what the script looks like in the inspector:


It generates the number of rooms requested (if there's enough "room"). It also creates corridors to allow full navigation of the generated map.

By tweaking the parameters we can generate different sizes and general shapes of rooms.


Here are some screenshots of some of the maps generated:








If you are interested in the code, leave a message, I will be happy to provide it.