
Saturday 17 December 2011

2D Lighting System tutorial series: Part 1


2D Lighting system tutorial series

Welcome to the first in a series of blog posts that are going to walk you through the creation of a fully featured 2D lighting system to use in your games. The system focuses on creating soft lighting and shadows that can be cast by any arbitrary sprite in your scene.
For a preview of the kind of system we'll be creating, have a look at the following video:


The video quality isn't great as I only have the free version of Fraps to record with, but hopefully it's given you an idea of what we are working towards.

For those of you who are already comfortable with shaders and digging around in other people's source code, I've uploaded the full source to both the lighting system and a basic sample that creates 2 different colored spotlights and lets you move them around the scene:


For the rest of you, in this post I'm going to set out what we'll be covering in the rest of the series.

The series will be split into 9 (fingers crossed!) parts:

1) Introduction, and series contents.

2) Introduction to shaders and the LightBlend shader.

3) Structure of the lighting system, overview of the algorithm, and the LightRenderer class.

4) Point lights: 'Unwrapping' the shadow casters, and creating the occlusion map.

5) Point lights: Lighting the scene.

6) Blurring: Creating soft shadows.

7) Spotlights: 'Unwrapping' the shadow casters part 2.

8) Spotlights: Lighting the scene and soft shadows.

9) Conclusion: Optimisations for the future.

I'll try and keep each part to a manageable length, but there is a lot of material to cover, so there may be a couple of long posts along the way!

Tumbleweed

Well, what a surprise! Yet another massive hiatus in updating my blog.

For the last few months I've been in something of a rut. I started designing maps for the-game-formerly-known-as-Sphero, and found myself lacking motivation. I was clear on the experience I wanted the player to have, but I was struggling to convert that into what I thought would be compelling puzzles and an interesting environment to traverse. I'd also stopped programming, which is the bit of game development that I enjoy more than anything else.

As a break I decided to work on a side project. I started work on a 2D lighting engine, based very loosely on the same principle as the one described by @CatalinZima here: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/

I had some innovative ideas to improve the approach, and I thought it would be a good short break from designing. Also, I've always wanted to play around with shaders and write some from scratch based on my own ideas, so this seemed a great place to start. I also thought it would give me the coding fix I needed, and let me recharge my batteries for a fresh run at designing Sphero levels. 

Unfortunately, I encountered a few tricky hurdles, and ended up with some very stubborn bugs in my shader code that left me stranded once more, mid-project, with no momentum. Rather than persevere, as I should have, I let myself get distracted by Ludum Dare.

For those of you who don't know, Ludum Dare is a quarterly, 48-hour game development competition. There are no prizes, just prestige. Participants vote on the theme in the run up to the competition, and the theme is announced at the start of the competition. I'd been toying with a new, very simple, mobile game idea for a while, and this seemed like the perfect opportunity to prototype it, if the theme fit. The theme that particularly fit with my idea was 'Escape'. So I up-voted escape and decided that I would only compete if that ended up being the theme. I'm not sure whether I expected it or not, but when I awoke on the morning of the competition, lo and behold, the chosen theme was 'Escape'.

So I stuck to my guns and dived head first into creating the game idea that had been brewing in my head for a week or so. The premise was simple. You have balls falling through space, and by tapping on the screen you create small black holes, the gravity of which alters the balls' paths, the goal being to alter their paths so that they hit an 'exit' portal. When all of the balls have been deposited in the exit portal, the player has passed the level. Any balls that get sucked into one of the black holes reappear at the top of the screen again.

In my head, this seemed like a pretty simple game. I decided to use XNA and Farseer for Ludum Dare, with the intention of porting to iOS later if I liked the game.

Even using APIs I was familiar with, I failed miserably to create anything playable in the time limit. There were other distractions, but the main reason was that I completely underestimated how difficult it would be to code everything from scratch. 

Despite this, I still thought that the game had potential, so I carried on with it after the competition. Some readers may have noticed a pattern emerging at this point, but I won't spoil it for those that haven't figured it out yet!

I ended up spending over 2 weeks creating something approaching a basic playable prototype. It took a lot of tweaking of physics values to create a system that let the player actually influence the path of the balls with any degree of control. But eventually I got there. There was a lot more that I had planned for it, but the core of the game was there.

And it was BORING! I realised very quickly that there was no way I was going to come up with an interesting game based on this mechanic. It was almost impossible to come up with level designs where the solution wasn't immediately obvious, and then it became a game of trial and error, playing with the placement of black holes to get them just right to solve the puzzle. In order to come up with interesting levels they would have had to be massively more complex, at which point they no longer lent themselves to the limited screen space of a mobile device.

Deflated, but determined not to stall yet again, I ploughed on half-heartedly trying to implement another feature, portals, which I thought might make the game a bit more interesting. 

Then came the Microsoft Build conference, which was the last time I posted to this blog. Microsoft released the developer build of Windows 8, and the XNA community immediately noticed that something was missing: XNA! 

There was a lot of noise about XNA being dead, which led me to write my last post on here. Incidentally, my confidence in the future of XNA has actually slipped further in the months since, but that's for another time. In light of the fact that, at least initially, it was clear that XNA games weren't going to be a viable option in Metro style apps, and therefore the Windows marketplace, I decided that there was a gap in the market for a new API. 

For a while I'd been toying with the idea of throwing in my current day-job/ career when I finish my part-time CS MSc course and trying to get a programming job in an established game studio. For that I need real-world C++ coding experience, and I'd been looking for a chance to use C++ in a project. 

These two events collided, and I decided I could kill two birds with one stone by writing a C++ version of the XNA API, written in DirectX 11 so as to allow XNA devs to port to C++ easily and use the new Metro UI. Even at the time I knew it was massively ambitious, but my fall back position was to write enough of the API so that I could port Alta, and so Sphero, to give me a new motivation to work on Sphero - get it done as a launch game for Windows 8. 

I set about the task, and actually got pretty far, completing a number of the smaller XNA classes in their entirety. Then I hit the real meat of the Graphics classes and started struggling. I wouldn't be in a position to properly test any of them until I'd written more, and without knowing DirectX 11 properly I could be heading in completely the wrong direction, but might not realise it for weeks. Still, I persevered (again).

Then a few things happened at once. Work got very busy. I was leaving for work an hour and a half earlier than usual and leaving work at the same time as always. I was averaging maybe 5-6 hours sleep a night. I'd started lectures again for my MSc, which was taking up a lot of my time with assignments. And then Steve Jobs died.

It's strange. I'm not a huge Apple fan. I have a MacBook, but spend most of my time on MS products. I have an iPod touch, but my phone runs Windows Phone 7. I'd always seen Steve Jobs as an impressive leader and a good salesman, but I would never have considered him an idol.

Yet when he died, I felt sad. Even now, months later, I have no idea why. But I did. This was before all of the media hype surrounding his death, before watching his famous Stanford address, or reading his sister's eulogy. When I heard the news I felt sad. And, as it played out in the media for the next few days, I, like tech enthusiasts and wannabe entrepreneurs everywhere, started re-evaluating my life.

I started making changes. I took up running. For reference, at school I could never finish the 1500m without stopping to walk at some point. I started following a program recommended by the NHS called Couch to 5k, which aims to get you from doing no exercise at all to running 5k over the course of 9 weeks. I've just finished week 8, and my last run was 4.9k in 28 minutes. I'm literally running further and for longer than I ever have in my entire life. Why? To prove to myself that I can.

I knew I needed to work some exercise into my routine, but I never feel that I have the time. I knew running took the least time and would burn a lot of calories quickly, that I could do it in the park next to my house so there'd be no travel time built in, but I ruled it out because 'I can't run long distances'. In the aftermath of Steve Jobs' death I started questioning any and all assumptions that started with 'I can't'. 

As you might expect, I also applied this approach to my game development. By this point I had 4 projects on the go. That was at least 2, maybe 3 too many, so I started by questioning why I had so many. I wasn't ready to throw in the towel on Sphero, I still believed it was a genuinely fun game, I just had to get the level design right. My lighting engine was maybe 70% done, but the bugs I'd hit had made me lose momentum, and I'd let them beat me. My mobile game was just not any fun, and my native XNA port was horrifically over-ambitious. 

So I started cutting down my projects. My lighting engine was the closest to completion, I was genuinely learning a lot about shaders, and so it was fulfilling its goals. The best way to get that project off the books was to finish it, so that would be my first priority.

My mobile game was DOA. The only reason I hadn't written it off was that I didn't want to give up on a project. But in this case, it was dead weight. The whole point of prototyping was to weed out the ideas that are no good, and this was one of those. Holding on to it would do me no good. So I declared it scrapped, with no intention of ever returning to it.

My native XNA port was tricky. The reasons for starting it were based on conflicting objectives. On the one hand, porting Sphero assumed both that I'd finished it, and that I'd be staying an indie developer. On the other, I wanted to do it to get C++ experience to get into the industry. It was an embodiment of my own indecision over my future path in games development. So, reluctantly, I put it into a perma-hiatus. I may one day resurrect it, but only if I actually need a C++ XNA port.

That left me with 2 projects: my lighting engine, which I'd work on first, and then back to Sphero. It was important to me to get the lighting engine finished first. I wanted to prove to myself that I could actually finish a project that I'd designed myself from the ground up, not just a clone of an existing game.

2-3 months on, and I've finished the lighting engine. The source is on codeplex, and I'm in the process of writing an accompanying tutorial series to guide others through how to develop their own. I'll be posting the series up here as I complete it.

And then, come the new year, it'll be back to Sphero. I had toyed with the idea of taking up a new project that would help me bring Sphero to more platforms than just the Xbox and Windows, and might also help speed up the process of developing Sphero, but I've resisted the temptation for now. Again, if the reasons are right, I might consider it, but I'm done with changing projects to try and get myself out of a rut. 

So, without further ado, I shall shortly be posting the first (and possibly second) tutorial in my new series on developing a 2D lighting/ shadow engine. I hope other people can get as much from it as I did developing it.  

Sunday 18 September 2011

Opinion: Why XNA isn't dead (yet).

It's been some time since I wrote here, and hopefully I'll soon have time to fill you all in on what I've been up to the last couple of months, but before I do that, I want to comment on the current state of XNA, and my opinion as to its future.

Background


Earlier this week, Microsoft kicked off its BUILD conference, looking at Windows 8, the new Metro UI, and all of the new technologies that sit underneath it. As the keynotes and session videos started to appear online, there was one technology conspicuous by its absence: XNA. Naturally Twitter started to get worried.

On the second day of the conference, in a session on developing DirectX games for Metro UI, some brave attendee asked the question: What about XNA?

The response was essentially: You can't use XNA with Metro UI.

Later on Giant Bomb posted with an official statement from Microsoft: http://www.giantbomb.com/news/the-future-of-xna-game-studio-and-windows-8/3667/

The statement said (reprinted from Giant Bomb):

“XNA Game Studio remains the premier tool for developing compelling games for both Xbox LIVE Indie Games and Windows Phone 7; more than 70 Xbox LIVE games on Windows Phone and more than 2000 published Xbox LIVE Indie Games have used this technology. While the XNA Game Studio framework will not be compatible with Metro style games, Windows 8 offers game developers the choice to develop games in the language they are most comfortable with and at the complexity level they desire. If you want to program in managed C#, you can. If you want to write directly to DirectX in C++, you can. Or if you want the ease of use, flexibility, and broad reach of HTML and Javascript, you can use that as well. Additionally, the Windows 8 Store offers the same experience as the current App Hub marketplace for XNA Game Studio, providing a large distribution base for independent and community game developers around the world.”

Why people are worried


The upshot of all of this is the following - XNA games can still be made for Windows 8, but only as Desktop apps, not Metro apps.

This also means that XNA games can be listed in the Windows 8 app store, but won't be sold through it, instead the listing will link to an external website of the developer's choice to allow users to buy the game.

This is an obvious barrier to people buying XNA games, especially when you consider that both casual games built with HTML5/ Javascript and high performance games built with DirectX 11 will have access to the Metro UI, and so can be sold directly in the app store. Why would someone click through to a site they might not have heard of, and fill out their credit card details, when the app store already knows their details and they can buy safely with a single click?

The final blow for XNA devs is that Microsoft have announced that the ARM version of Windows 8 will only support Metro UI apps, not Desktop apps. So if you were hoping to bring your XNA game to a Windows-powered tablet audience, you're out of luck.

You'd be forgiven for thinking that this clearly shows Microsoft is abandoning XNA as a future technology for gaming on its Windows platform. It also calls into doubt whether XNA will be supported on future versions of Windows Phone, and on the next version of the Xbox.

So that brings us to our next question:

Why isn't XNA supported with Metro Apps?


It seems to many that if Microsoft was serious about XNA as a technology then this was a prime opportunity to make XNA a first class citizen in their Windows ecosystem. Both large and small studios have been using XNA for their games on Windows Phone 7, and more and more successful indie titles on Steam are using XNA as well (Terraria, Magicka, and most recently Bastion, to name only a few). Surely allowing XNA developers to build games for a new generation of Windows powered tablets is a no brainer?

Turns out, it's not as simple as you might think. The reason? I think it comes down to one thing: DirectX 9.

XNA is built on top of DirectX 9. DirectX 9 is now a pretty old technology, and Microsoft has decided that DirectX 9 will not work with Metro UI. Personally I agree with this decision; DirectX 9 games are still built in order to reach Windows XP users, and more importantly, because it allows code sharing with Xbox 360 builds. However, the speculation in the gaming press is that the Xbox vNext is in development, and that we'll probably see an announcement at some point next year.

So if DirectX 9 doesn't work with Metro UI, then by extension neither does XNA. And let's face it, Microsoft was never going to spend time and effort supporting DirectX 9 with Metro UI just for XNA games. It would mean including a DX9 runtime in the ARM version exclusively for XNA games, and probably a million other headaches I've not even thought of.

Why I'm not worried (yet)...


Strangely, since finding out why XNA isn't supported in Metro UI, most of my fears that XNA was dead have faded away. I'm going to try and explain why.

Let's think about this another way. What would Microsoft have needed to do in order to support XNA with Metro Apps out of the box? The way I see it they had 2 options:

They could have supported DirectX 9, but there are plenty of reasons they wouldn't want to do that (see above).

Or they could have re-written XNA to run on DirectX 11 under the hood.

Let's unpack that second option for a second. The Xbox 360, and probably Windows Phone 7, use some form of DirectX 9. That means that either a DirectX 11 version of XNA would be Windows only, and we'd still need to use the current XNA version for developing on Xbox and the phone, or else the API would need to stay static and a DirectX 11 code path would need to be put in place for Windows.

That would be a fair amount of work, but you might think that it's worth it if Microsoft is serious about XNA. Unless you consider the Xbox vNext.

Most likely the Xbox vNext will run on DX11 or some form of it. If Microsoft plans to put XNA on Xbox vNext (as I hope they do if they're serious about it), then it would make sense to do a full DX11 rewrite of XNA at that point. They can't do it now, because I doubt the software stack for the new Xbox is anywhere near being nailed down yet.

So in conclusion, I believe the lack of Metro support for XNA means one of two things. Either our worries are justified, and Microsoft plans to cut XNA loose OR Microsoft is actually really serious about XNA as a future technology, and is waiting until it can be rewritten for the Xbox vNext and Windows 8 at the same time.

Until we know if XNA is going to be on the Xbox vNext, I haven't given up hope.

Thursday 28 July 2011

Design-time: first area map

It's been a while since my last update for one reason or another. I took some time out from working on Sphero to start up a small side project. I won't elaborate on it here now, as I'm not sure if it's going to go anywhere. If it does then I'll be sure to post an update :)

I started back on Sphero a bit over a week ago, and I've finally turned my hand to designing the map/ puzzles for the game.

The game is essentially a cross between a Metroidvania and a puzzle platformer. The world is split into 5 areas: the Forest, the Ice Wastelands, the Mine, the City, and the Core. Each of the first four areas contains one of the 4 totems, each of which unlocks one of the gameplay mechanics (double-jumping, turbo speed, wall-crawling and, er, being on fire!).

The two images below show what I've designed so far. The first is a general guide for the 5 areas, and the second is the detailed design for the Forest area. The overlay with arrows/ words etc is just a guide so that I know when I need to design the puzzles in such a way that the player will need a certain ability to proceed:




You may have noticed the little yellow section marked 'ship' in the first image. You'll just have to wait and see what that's all about... ;)

The map might not look like much, but it's taken a lot, lot longer to design these puzzles than I'd ever expected. I seem to be saying this a lot lately, but design is hard! Given that a puzzle platformer lives or dies by the quality of its puzzles, its balance between being challenging enough and not being frustrating, and its learning curve, there's a lot to consider, and I'd be very surprised if I don't have to make significant changes to all of the puzzles I've designed so far before I'm through, but it's a start.

Anywho, that's all for now, I'll post again when I have the map for the next area finished.

Catch you next time.

Sunday 12 June 2011

Updates part 2: Mode switching

And as promised, a video showing off switching mode with the right thumb-stick, Crysis style. It should probably be noted that all the artwork, including the interface for mode switching, is just place-holder art at the moment.

A quick note on what each of the modes are: Black is normal, Red currently is the same as normal (but will be on fire!), Grey allows a double jump (Air), Blue increases movement speed (Lightning), and Brown allows wall crawling (Rock).

You'll just have to use your imaginations until I get some proper art work in place!

Here's the video, enjoy!

Saturday 11 June 2011

Updates part 1: Parallax layers

I have 2 updates to share, but the 2nd might have to wait until tomorrow, as it takes a while for new videos to show up on the youtube search.

I've been v. busy the past few weeks with prep for exams (I'm taking a Masters in Computer Science in the evenings alongside my day job), so haven't had a lot of chance to work on Sphero. What I have managed to do is to add parallax layers and a new way of switching mode (with the right thumb-stick, Crysis style).

Here's a video of the parallax layers (excuse the programmer art...)


I'll add in a video of the mode switching tomorrow when it appears on the youtube search.

So what's next? Well, it's finally design time! I'm going to be designing the map in full over the next few weeks, from which I should generate a nice big list of features that need to be added to the engine and the editor, so that I can add the various features of the world to the maps.

I'll probably upload sketches of the map design during design to give you a flavour of what the finished game will be like, but otherwise its going to be quite quiet until I finish designing and start implementing features.

Sunday 8 May 2011

Map transitions

Just a quick video of map transitions between world sections in Sphero, based on those you come across in the old SNES/Game Boy Metroid games.





I'm still working on the editor extension that will let me arrange world sections in a grid in my editor. Fingers crossed I'll finish that today.

That's all for now, hopefully I'll have time for an update next weekend! Have a good week!

Friday 6 May 2011

Progress report...

Ok, so once again I didn't manage the promised update last week, mostly due to starting a new role at work, but also because I've been making steady progress on Sphero. I've added transitions from one section of the map to the next, which I'll upload a quick video of at some point over the weekend (fingers crossed).

I'm also adding a mode to my editor which will let me edit the map as a whole, as well as the individual sections. That will probably take me a little while, and I still need to implement the ability to add and edit parallax layers in the editor, so it looks like I have some more tools programming ahead of me! Fortunately I can build on the functionality I already have with my editor, so that should make life a lot easier!

After that I think I probably need to knuckle down to some actual designing of the game map, including hazards, obstacles, and puzzles. From that I should be able to pull out a list of features that I need to implement in game code/ the editors. Then I'll need to actually create the basic map.

Then it'll be on to designing enemies and their abilities, before implementing them in code, and adding them to the maps. By that point I should have a working game, albeit with place holder art and no sound. Art will come next, including graphical effects/ particles/ shaders that will need to be implemented in code, along with scenery, backgrounds, and animations.

Then will come the sound/ music. And then the trimmings, UI, title/ start screens, HUD, along with any other features that I've thought of along the way that can be added with little hassle at the end.

So there we go, that's the rough roadmap for Sphero. Looking at it, if I have it ready for DBP 2012 (assuming there is one), I will be very, VERY impressed with myself... *GULP*

Wednesday 27 April 2011

Newsflash: Collaboration is hard!

A quick update (but hopefully more to follow in the next couple of days).

I've been working for the last month or so on a collaboration project with another XNA developer that I met over Twitter. I won't discuss the game itself as the concept doesn't really belong to me, but this was to be my first time collaborating with someone else on a project, and I was pretty excited about it. However, I soon realised that collaboration is actually pretty hard!

The game we were working on was a concept that the other developer had already given some thought to, although I had some input in what direction we were going to take it, and in refining the concept in general. This process was actually pretty painless; I've always considered game design to be one of my weaker areas, so I was happy to defer in most situations where there was a difference of opinion.

The biggest challenge for me was working with the code itself. Alta is still very immature, and its code is very slap-dash where I've thrown in extra features as I go along. At some point later this year I plan to do a massive overhaul of Alta, both to do some much needed refactoring/ restructuring, but also to tidy up the code and document it so that another developer coming in could use the engine without much trouble.

Anyway, due to Alta not being ready for collaboration, we used the other developer's engine as a codebase. This meant me working to his coding style and standards, which is a completely new experience for me, and one that I'll admit I struggled with at first. It felt very limiting in some ways.

When I have no restrictions on how I code, I can work very iteratively, throwing in a few lines of code, hitting build and run, dealing with any errors, testing, and then going back to the beginning and fixing bugs/ refining. This tends to work very well for me, and makes me feel like I'm being productive. The downside is that if I don't take the time to go back and refactor my code and generally neaten it up and document it, then it becomes a mess that only I can navigate (much like my room as a teenager!).

So working to a coding style that was very neat and ordered was a big transition for me. I found myself holding back on checking in code that I'd written because although it worked (usually! no one's perfect...) it wasn't neat/ in the same style as the other code in the engine, and later the game code. This meant that I wasn't checking in code as quickly as I'd like, and features and progress seemed to take a long time to implement. This in turn led to a downturn in morale for me, as the project was very quickly falling behind schedule.

Added to that, both myself and the other developer had other commitments, myself mostly to work, college, and family, and he to other projects that he was already working with others on. Perhaps inevitably, eventually we both agreed that the project should be sidelined for the moment, and that we'd come back to it later on.

I still intend to work on the project as a side project, and I'd really like to collaborate with the developer again in the future, as he's a great guy, and despite the initial pain I described above, I really learnt a lot from the short time that I was working with him.

But for now, I'm back to focusing on Sphero. It won't be in any way ready for DBP this year, but my aim is to have a good working prototype with placeholder art ready by then, and then spend the next year refining it and creating or commissioning the art, and polishing it till it shines.

In the meantime I'll still be posting about Farseer as I learn more about it, about the development of Alta as I progress through creating Sphero, and any other projects I pick up along the way (my Unity interface is still on the cards!).

That's all for now, I hope to have a mini roadmap for Sphero to share at some point tomorrow.

Bye for now!

Saturday 5 March 2011

I promised two blog posts today, and for once I've exceeded a target!

I just want to post an update on DBP 2011. Rather than try to rush and finish Sphero in time for the June deadline, I've decided to take up an offer of a collaboration from one of the devs I follow on Twitter.

I'm pretty excited about it, as it'll be the first collaborative project I've worked on.

I won't give any more details about the collaboration at this stage, as it's really my new team mate's gig, but hopefully I'll be able to post an update in the future.

I'll still be working on Sphero and Alta (in fact we'll probably use Alta's editors in some form for the game), and another project I'll talk about in a second, but the collaboration will be the focus. You can still expect Sphero to make an appearance at DBP 2012! :)

One final note, I've started work on a side project, which I've dubbed 'Xnity'. As the name suggests, it's a bridge between XNA and Unity, and will consist of some (but not all) of the core XNA classes.

The aim is to minimize the number of code changes XNA devs need to port their games to Unity, essentially just using Unity as a compatibility layer to reach more platforms than XNA offers alone.

It will be an open source project, and I'm hoping other XNA/Unity devs will get involved as well.

This is very much a side project, but I'd still like to get through a few classes a week.

Well, that's plenty of blogging for now, I have work to do!

Quick 2d terrain with Farseer

This is going to be a quick tutorial on one approach to implementing 2D terrain in Farseer. I'm going to assume you already have a reasonable grasp of XNA and at least a basic grasp of Farseer, and that you either have access to an extensible map editor, can build one, or are happy to create the coords for your terrain in some other way (perhaps in ASCII in a text file, or hard-coded values).

However you get your data, for this tutorial you'll need to get it in the following format:

• Your terrain will need to be divided up into 'ledges', or continuous solid lines.

• Your ledges will need to be divided up into 'edges', a straight line between 2 'nodes'. The end node of one edge must be the beginning node of the next.

• Each ledge must be stored as a List of Vector2, each containing the coordinates in screenspace of the nodes (in the order they appear from one end of the ledge to the other).

Obviously you can adapt this (array instead of list, Point instead of Vector2, etc.), but for now I'll assume we are using the above.

We can now plug into the following method:


public void CreatePhysicsLedge(List<Vector2> vectors, float friction, float restitution)
{
    // One static body holds all of the edge fixtures for this ledge.
    body = BodyFactory.CreateBody(game.world);
    body.BodyType = BodyType.Static;
    body.IsStatic = true;

    fixtures = new List<Fixture>();

    // Create one EdgeShape per pair of consecutive nodes (converted to sim units).
    for (int i = 1; i < vectors.Count; i++)
    {
        Vector2 tempVec1 = ConvertUnits.ToSimUnits(vectors[i - 1]);
        Vector2 tempVec2 = ConvertUnits.ToSimUnits(vectors[i]);
        EdgeShape shape = new EdgeShape(tempVec1, tempVec2);

        // Vertex0/Vertex3 are the 'ghost' vertices of the neighbouring edges,
        // which let Farseer smooth collisions across the joins between edges.
        if (i != vectors.Count - 1)
        {
            shape.HasVertex3 = true;
            shape.Vertex3 = ConvertUnits.ToSimUnits(vectors[i + 1]);
        }
        if (i != 1)
        {
            shape.HasVertex0 = true;
            shape.Vertex0 = ConvertUnits.ToSimUnits(vectors[i - 2]);
        }
        fixtures.Add(body.CreateFixture(shape));
    }

    // Apply the same friction and restitution to every edge fixture.
    for (int i = 0; i < fixtures.Count; i++)
    {
        fixtures[i].Friction = friction;
        fixtures[i].Restitution = restitution;
    }
}

This makes use of the ConvertUnits class from the Farseer samples.
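As a quick usage sketch (the coordinates here are made-up screen-space values, and 'terrain' just stands in for whatever object owns CreatePhysicsLedge in your code):

// A ledge made of three edges, defined by four nodes in screen space (pixels).
// These positions are placeholder values for illustration only.
List<Vector2> ledge = new List<Vector2>
{
    new Vector2(100, 400),
    new Vector2(220, 380),
    new Vector2(340, 390),
    new Vector2(460, 420)
};

// Moderate friction, a little bounce.
terrain.CreatePhysicsLedge(ledge, 0.5f, 0.1f);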

And that's it! Simple as that.

Hopefully that will save some other devs some time here or there. Enjoy!

Promised update!

Two posts today. First of all, a Sphero update. I originally intended to spend about a day implementing wall-crawling. In reality it's taken me over a week, mostly due to me not understanding how Farseer works.

Needless to say I got there in the end, and I have a better understanding of Farseer as a result, which I'll go into in a second. First though, a video of wall-crawling in action:


Not much to look at, I know, but a lot of blood, sweat, and tears (ok, coffee, pondering, and swearing at my monitor) went into getting it working, so I feel I should share some of the lessons I've learned.

My 2d terrain is made up of EdgeShapes, which are polygons with 2 vertices and a single edge. In other words, lines!
In order to stick to the various surfaces, I decided to look at all of the Fixtures that the character is touching each frame, check to see if they're terrain, and if they are, add the collision normals between the character and each fixture together, and apply a force to the character in the opposite direction.
That probably wasn't very clear, so here's a handy diagram to demonstrate the idea:

The first problem I hit was, well, it didn't work. My character, instead of sticking to surfaces, was instead floating above them.
Debugging showed that although most of the normals were reporting correctly, some were coming out as straight up.
Let me back track a bit, and show you how I implemented this:

{
    {
        // Walk the character's contact list, summing the world-space collision normals
        // of every terrain fixture we're currently in contact with.
        ContactEdge tempContactEdge = circle.Body.ContactList;
        Contact tempContact;
        Vector2 normalSum = Vector2.Zero;
        int count = 0;
        if (tempContactEdge != null)
        {
            while (tempContactEdge != null)
            {
                // Only consider contacts where one of the fixtures is terrain
                // (Cat11 and Cat12 are my terrain collision categories).
                if (tempContactEdge.Contact.FixtureA.CollisionFilter.IsInCollisionCategory(Category.Cat11) || tempContactEdge.Contact.FixtureA.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    tempContact = tempContactEdge.Contact;
                    count++;
                }
                else if (tempContactEdge.Contact.FixtureB.CollisionFilter.IsInCollisionCategory(Category.Cat11) || tempContactEdge.Contact.FixtureB.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    tempContact = tempContactEdge.Contact;
                    count++;
                }
                else
                {
                    tempContactEdge = tempContactEdge.Next;
                    continue;
                }

                // Only contacts with manifold points have actually collided, so only
                // those contribute a normal; anything else is an AABB-only overlap.
                if (tempContact.Manifold.PointCount != 0)
                {
                    Vector2 tempNormal;
                    FixedArray2<Vector2> tempPoints;
                    tempContact.GetWorldManifold(out tempNormal, out tempPoints);
                    normalSum += tempNormal;
                }
                else
                {
                    count--;
                }
                tempContactEdge = tempContactEdge.Next;
            }
        }

        if (count > 0)
        {
            // Average the normals, then push the character towards the surface
            // with a force equal and opposite to gravity.
            circle.Body.IgnoreGravity = true;
            normalSum /= count;
            normalSum.Normalize();
            normalSum *= (-gravForce);
            circle.Body.ApplyForce(normalSum);
        }
    }
}

I grab a reference to the character's contact list, which is essentially a doubly linked list of Contacts. I iterate through it, looking for Contacts that contain fixtures that are in my terrain collision category, and then for those that are, I call the contact's GetWorldManifold() method, which gives me back a collision normal and the manifoldPoints of the collision in world coordinates. I've included a quick diagram to illustrate what a manifold is below, just in case you're not familiar with it:




I keep a cumulative sum of the normals as well as a count of how many there are, and then take an average at the end. Then all that remains is to normalize the average (just in case), and apply a force at the centre of our character in the opposite direction.


So you can see that random additions of normals pointing in the wrong direction could cause problems!


After some digging inside Farseer I discovered that GetWorldManifold() returns Vector2.UnitY when called on a contact which has manifold.pointCount = 0 (the points it is referring to are those in the diagram above).


This confused me no end. My understanding of the engine was that a contact being included in an object's contact list meant that the two objects had collided. Turns out that this isn't quite the case.


After posting on the Farseer forums (a great bunch, just make sure you search before asking!) to see if this was a bug, I was put straight. A contact is formed when two AABBs (think of them as large bounding boxes) intersect. Each shape or fixture has an AABB to quickly check for possible collisions. The engine can then check whether the two Fixtures associated with the AABBs have collided and make them react appropriately. If you'd like an illustration of AABBs and what they do, download the Testbed solution from the Farseer codeplex page and hit F4 while it's running.

So whenever my character got to the end of an EdgeShape, it would hit the AABB of the next EdgeShape, but wouldn't yet be in contact with the shape itself, causing manifold.pointCount to be zero, and giving me my floaty behaviour.

This was easy to fix, I just had to make sure manifold.pointCount > 0 and it was fixed. Or so I thought.

It worked just fine for concave edges and shapes, i.e. the inside of a room, which was exactly what my test map happened to be.

However, when I then built a map with an island in the middle, the character would get to a convex (or 'pointy') join between edges, and then fall off.

If we look back at the algorithm I described, it should be fairly obvious what was happening: the character would lose contact with all fixtures, and my code then reapplied gravity until it touched another fixture, i.e. the floor.

So my solution was to attach a second, larger circle fixture to the character body with zero density, and set isSensor to true.

The idea was to use this as a backup, so that when the main character fixture loses contact with all fixtures, we can use the sensor to check if there are any other fixtures nearby, and we can use that to give us the direction of our wall crawling force.

The final issue I came across was that sensors don't generate manifolds, so I had to manually calculate the normals on the EdgeShapes.
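That calculation is just a point-to-line-segment projection. Pulled out on its own (the helper name here is mine, not from the actual code), it looks something like this:

// Given an edge running from 'a' to 'b' and the sensor's position, project the sensor
// position onto the edge's line and return the unit vector pointing from that closest
// point back towards the sensor. This is the same maths as the inline version in the
// full listing below.
private static Vector2 NormalFromEdge(Vector2 a, Vector2 b, Vector2 sensorPosition)
{
    Vector2 edge = b - a;
    float t = Vector2.Dot(sensorPosition - a, edge) / Vector2.Dot(edge, edge);
    Vector2 closestPoint = a + t * edge;

    Vector2 normal = sensorPosition - closestPoint;
    normal.Normalize();
    return normal;
}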

After all that, I ended up with the video above. *Phew*

That was a bit long winded, so for those of you that prefer code, I've added the finished code snippet below.



{
    {
        // Same idea as before, but with a fallback: if the character fixture has lost
        // contact with the terrain, use the larger sensor circle to work out a normal.
        ContactEdge tempContactEdge = circle.Body.ContactList;
        Contact tempContact;
        Vector2 normalSum = Vector2.Zero;
        int count = 0;
        if (tempContactEdge != null)
        {
            while (tempContactEdge != null)
            {
                // Only consider contacts where one of the fixtures is terrain.
                if (tempContactEdge.Contact.FixtureA.CollisionFilter.IsInCollisionCategory(Category.Cat11) || tempContactEdge.Contact.FixtureA.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    tempContact = tempContactEdge.Contact;
                    count++;
                }
                else if (tempContactEdge.Contact.FixtureB.CollisionFilter.IsInCollisionCategory(Category.Cat11) || tempContactEdge.Contact.FixtureB.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    tempContact = tempContactEdge.Contact;
                    count++;
                }
                else
                {
                    tempContactEdge = tempContactEdge.Next;
                    continue;
                }

                if (tempContact.Manifold.PointCount != 0)
                {
                    // A real collision: use the world manifold's normal directly.
                    Vector2 tempNormal;
                    FixedArray2<Vector2> tempPoints;
                    tempContact.GetWorldManifold(out tempNormal, out tempPoints);
                    normalSum += tempNormal;
                }
                else if (tempContactEdge.Contact.FixtureA.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    // Sensor-only contact (no manifold): project the sensor's position onto
                    // the edge and use the vector from the closest point back to the sensor.
                    EdgeShape eShape = (EdgeShape)tempContactEdge.Contact.FixtureA.Shape;
                    Vector2 v = eShape.Vertex2 - eShape.Vertex1;
                    Vector2 w = sensorCircle.Body.Position - eShape.Vertex1;
                    float t = Vector2.Dot(w, v) / Vector2.Dot(v, v);
                    v = eShape.Vertex1 + (t * v) - sensorCircle.Body.Position;
                    v.Normalize();
                    v *= -1;
                    normalSum += v;
                }
                else if (tempContactEdge.Contact.FixtureB.CollisionFilter.IsInCollisionCategory(Category.Cat12))
                {
                    // Same as above, but the edge is FixtureB.
                    EdgeShape eShape = (EdgeShape)tempContactEdge.Contact.FixtureB.Shape;
                    Vector2 v = eShape.Vertex2 - eShape.Vertex1;
                    Vector2 w = sensorCircle.Body.Position - eShape.Vertex1;
                    float t = Vector2.Dot(w, v) / Vector2.Dot(v, v);
                    v = eShape.Vertex1 + (t * v) - sensorCircle.Body.Position;
                    v.Normalize();
                    v *= -1;
                    normalSum += v;
                }
                else
                {
                    count--;
                }
                tempContactEdge = tempContactEdge.Next;
            }
        }

        if (count > 0)
        {
            // Average the normals and apply the anti-gravity 'stick' force.
            circle.Body.IgnoreGravity = true;
            normalSum /= count;
            normalSum.Normalize();
            normalSum *= (-gravForce);
            circle.Body.ApplyForce(normalSum);
        }
    }
}

That's it for this post, but I'll be putting another post up in a bit with a short tutorial on 2d terrain in Farseer.
See you in a bit.

Wednesday 2 March 2011

Updates soon!

I've been reinstalling my operating system recently, so I haven't been able to post, but I have an update that I'm prepping for the weekend, plus hopefully a quick tutorial on 2d terrain with Farseer.

More soon!

Saturday 5 February 2011

Sphero progress update

The first of many updates on Sphero. My plan is to blog all the way through the development of the game to try and keep myself on track. Anyone who's gone back and read the previous entries in this blog will see that keeping on track is something that I struggle with!

So without further ado, here is the first progress update on Sphero.

So far I've implemented creating 2D terrain inside my map editor in the form of ledges or solid paths. You simply click where you would like the next node to go. The nodes can also be dragged and moved around to alter the shape of the ledge.

I've also implemented these ledges in Alta, with the engine creating EdgeShapes using Farseer to create solid terrain.

Finally, I've created the first prototype of the basic movement mechanics for the main character. They're very rough at the moment, but as I'm going for a rapid prototyping approach, they're good enough at this point.

To wrap up, I whipped up a quick map and put together a demo:

http://www.youtube.com/watch?v=69jQG3r5I88

That's all for now. I'm hoping to get some more of the movement mechanics done today, so I may have another video for you soon!

Thursday 3 February 2011

2D Rendering with SpriteBatch: My approach

I'll be posting an update (along with hopefully a video of the first prototype) of my progress on Sphero at the weekend, but today I wanted to take a second to share my approach to 2D rendering using SpriteBatch.
SpriteBatch itself is an optimisation feature of XNA, grouping and/or sorting sprites to reduce the number of draw calls or texture swaps that need to take place to render the scene. However, the performance benefit from using a SpriteBatch can vary depending on how it's used.

The approach I took was to try and figure out what the optimal way of using sprite batch was, and then try and work the rest of my engine around that. The following is the approach I came up with.

***HEALTH WARNING***

This method has not been tested on an Xbox, so I can give no guarantees about its performance.

At the centre of the set-up are a class, a struct, and 2 dictionaries.

The key to it all is the RenderManager class. However, I won't say too much about this class just yet.

I'm going to try and go through these in a logical order. First of all, we have a dictionary which contains all of the textures we use in the game, with their names as keys (you could use a unique ID number if you prefer, I just find strings easier to work with and remember). All textures are loaded through the RenderManager class, which loads the texture from the content pipeline, and stores it in the texture dictionary with its name as its key.

The next piece of the puzzle is the RenderObject2D struct. A better name might have been 'Sprite', as that's essentially what it is. It holds all of the information needed for the RenderManager to draw the sprite that it describes. This includes a string representing the texture of the sprite.

The penultimate item that makes up this system is the layer dictionary.

This is not quite as it sounds: it isn't a dictionary of layer objects, but rather a Dictionary<string, int>. The int refers to the order of the layers, and the string is the name the game code uses to tell the RenderManager which layer to add the new sprite to.

Finally we get to the RenderManager.

The RenderManager does 3 main jobs. It loads and stores textures (as we've already seen). It keeps track of all of the sprites to be drawn this frame, and it does the actual drawing.

Essentially the way all of this fits together is this:

Each frame, the game code calls RenderManager.AddToScene() in place of its normal draw calls. This (heavily overloaded) method takes all of the parameters you would need in a SpriteBatch.Draw() call, with two big differences. The first is that instead of a reference to a texture, it passes the name of the texture, and the second is that instead of a depth parameter, it passes the name of a layer.

Inside the RenderManager, every time AddToScene() is called, a RenderObject2D struct is created and filled out with all of the parameters barring the layer name.

Next, the RenderObject2D struct is stored inside what is effectively a 3-dimensional array (a List<List<List<RenderObject2D>>> called scene). This is indexed by layer and texture.

In plain English, the scene is a List of layers. Each layer is a list of sub-layers grouped by texture. Each sub-layer is a list of RenderObject2Ds.

After all other components have had 'draw' called on them, RenderManager's draw() method is called. The RenderManager then starts a spriteBatch in 'immediate' mode, and then loops through the scene, starting with the backmost layer, looping through each Texture sub-layer to ensure the minimum number of texture swaps.

Essentially this means the sprites never need sorting, because they were filed away in the right place when they were added to the RenderManager's scene.

There are some optimisations around the Lists to ensure that they don't allocate memory each frame from resizing, and it could be made more efficient by drastically reducing the size of a RenderObject2D by using a numeric identifier for textures, but that should give you a flavour of how I've approached dealing with SpriteBatch.
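To make that a bit more concrete, here's a heavily stripped-down sketch of the idea. This isn't the actual class from my engine; the member names (AddLayer, the single AddToScene overload, the fields on RenderObject2D) are illustrative, and it skips texture loading and the pre-allocation optimisations mentioned above:

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public struct RenderObject2D
{
    public string TextureName;
    public Vector2 Position;
    public Color Tint;
}

public class RenderManager
{
    // Texture name -> texture (filled when textures are loaded through the manager).
    Dictionary<string, Texture2D> textures = new Dictionary<string, Texture2D>();

    // Layer name -> layer index (draw order, back to front).
    Dictionary<string, int> layers = new Dictionary<string, int>();

    // scene[layerIndex] is a list of texture sub-layers; each sub-layer is a list of sprites sharing one texture.
    List<List<List<RenderObject2D>>> scene = new List<List<List<RenderObject2D>>>();

    public void AddLayer(string name)
    {
        // Layers must be registered (back to front) before sprites are added to them.
        layers.Add(name, scene.Count);
        scene.Add(new List<List<RenderObject2D>>());
    }

    public void AddToScene(string textureName, string layerName, Vector2 position, Color tint)
    {
        RenderObject2D sprite = new RenderObject2D { TextureName = textureName, Position = position, Tint = tint };

        // Find (or create) this texture's sub-layer within the requested layer, then file the sprite away.
        List<List<RenderObject2D>> layer = scene[layers[layerName]];
        List<RenderObject2D> subLayer = layer.Find(s => s.Count > 0 && s[0].TextureName == textureName);
        if (subLayer == null)
        {
            subLayer = new List<RenderObject2D>();
            layer.Add(subLayer);
        }
        subLayer.Add(sprite);
    }

    public void Draw(SpriteBatch spriteBatch)
    {
        // Back-most layer first; within a layer, one run per texture, so nothing ever needs sorting.
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
        foreach (List<List<RenderObject2D>> layer in scene)
            foreach (List<RenderObject2D> subLayer in layer)
                foreach (RenderObject2D sprite in subLayer)
                    spriteBatch.Draw(textures[sprite.TextureName], sprite.Position, sprite.Tint);
        spriteBatch.End();
        // (In the real thing the lists are cleared and reused here rather than reallocated each frame.)
    }
}

Game code would then register its layers at start-up, call AddToScene in place of SpriteBatch.Draw each frame, and make a single call to RenderManager.Draw at the end of the frame.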

Now the question is, have I got this completely wrong? Is there a glaringly obvious reason not to go down this route? What are the pros and cons?

I'd love to hear your thoughts!

Sunday 23 January 2011

Updating the Farseer Platformer tutorial to Farseer 3.2 (footnote)

Because I'm forgetful, and Blogger doesn't like me editing blog posts with code in them, I've added a separate post on updating the tutorial from Farseer 3.0 to 3.2.

There are only 2 issues you should encounter upgrading the 3.0 tutorial to 3.2. The first is that CollisionEventHandler has been renamed as OnCollisionEventHandler (but only the name appears to have changed).

The second is that fixture.IgnoreCollisionWith() is now fixture.CollisionFilter.IgnoreCollisionWith(). And that's pretty much it! There are other changes between the two libraries, but these are the only ones we'd encounter.
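In other words, where fixtureA and fixtureB are any two fixtures you want to stop colliding with each other:

// Farseer 3.0:
fixtureA.IgnoreCollisionWith(fixtureB);

// Farseer 3.2:
fixtureA.CollisionFilter.IgnoreCollisionWith(fixtureB);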

If you need help updating the rest of the code from XNA 3.1 to 4.0, I suggest you check out the upgrade cheat sheet on xdsk2: http://www.nelxon.com/blog/xna-3-1-to-xna-4-0-cheatsheet/

Updating the Farseer Platformer tutorial to Farseer 3.2

***UPDATE***


This is an update of my previous post updating @roytries's Farseer Platformer tutorial on Sgt. Conker to Farseer 3.0. Since posting that item Farseer Games have released Farseer Physics 3.2. This updated version will cover how to bring the original tutorial up to date to version 3.2, and I have added a section at the end highlighting the changes from 3.0 to 3.2.


*************

Updating the Farseer Platformer tutorial to Farseer 3.2. 

The Farseer platformer tutorial on Sgt. Conker (here http://www.sgtconker.com/2010/09/article-xna-farseer-platform-physics-tutorial/) is a brilliant guide to getting started with Farseer and integrating it with your own drawing code rather than using the 'DebugView' that comes with the Farseer samples.

If you haven't already you should go and work through it, as it'll give you a good idea of what Farseer does and roughly how it works. Then when you're done, head back here and we'll go through how to get the tutorial working with Farseer 3.2.

Why bother? Can't I just use the version that the tutorial uses? 

Good question! I'll answer the second part of the question first. There is absolutely nothing stopping you using the older version of Farseer in your game, and indeed you may decide that it's preferable. The library for 2.1.3 compiles without error under XNA 4.0, and you may find it easier to work with because the units are simpler (see below).

So why would you bother to learn Farseer 3.2? Rather than trying to go over all the differences here, I'll direct you to the FAQ on the Farseer webpage:

http://farseerphysics.codeplex.com/wikipage?title=Differences%20between%20Farseer%20Physics%20Engine%202.x%20and%203.0&referringTitle=Tutorials

It's down to you to decide whether your project would benefit from the new features or not.

On with the show!

If you're still reading then you're at least curious about updating the tutorial to Farseer 3.2. I'm going to start at the beginning of the tutorial and work my way through, highlighting the changes as I go. If you'd rather just look at the updated source code, then I've included the code files at the bottom of the page.

Set-up

First things first, head to the Farseer webpage and download the latest release. You can either just download the Farseer library, or you can download the version with samples included as this contains a helper class that we're going to be using in a moment. I've also included the helper class in the source below though, so just grabbing the library is enough.
Creating a project

This section should be largely unchanged. I've called my project FarseerPlatformer for consistency. The only thing to note is that you need to make sure you reference the correct FarseerXNA project (3.2 as opposed to 2.1.3).

My first physics

Here we start to see the differences in the new engine.

First of all we need to tweak the using statements that we add to Game1 slightly, as the namespaces in the library are different. Change them to:

using FarseerPhysics;
using FarseerPhysics.Dynamics;
using FarseerPhysics.Factories;
using FarseerPhysics.Collision;

In 3.2, we no longer have a PhysicsSimulator. Instead we simply have a World class which we create an instance of. For our purposes we can treat this as a re-naming of the PhysicsSimulator class.

We create a new instance of World as follows:


world = new World(new Vector2(0, 20f));

Two things to note here. First, that the constructor for World is the same as the constructor for PhysicsSimulator, taking a Vector2 representing gravity as its argument. Second, that the magnitude of the gravity vector is 20, rather than 500. This is due to a difference in the units that 3.2 uses, which we'll discuss in a moment.

Next up is updating our World each frame. This is done by adding the following to the bottom of the update method (before base.Update(gameTime)):

world.Step((float)(gameTime.ElapsedGameTime.TotalMilliseconds * 0.001));

world.Step() is the new update method for the World class. Note that we use milliseconds * 0.001 rather than seconds. This gives us a more accurate timing, which will improve our simulation.
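For context, the Update method in Game1 ends up looking something like this (your existing game logic goes where the comment is):

protected override void Update(GameTime gameTime)
{
    // ... input handling and game logic ...

    // Advance the physics simulation by the elapsed frame time.
    world.Step((float)(gameTime.ElapsedGameTime.TotalMilliseconds * 0.001));

    base.Update(gameTime);
}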

Now we look to creating physics objects. Under 2.1.3, you created a body which managed and reacted to forces on your object, and a geom or geometry which was used for collisions, friction etc.

Under 3.2 you still have these (although a geom is now a Shape) but they are tied together by a Fixture. In fact we can create our object in one go using FixtureFactory. Before we do that however, we need to talk about units.

In 2.1.3, Farseer worked with pixels as its units, so all calculations were done in screen space. Box2D (the physics engine that 3.2 is based on) uses a system called MKS, or Meters, Kilograms, Seconds.

So, if we just punch in the dimensions of the block from the tutorial, the engine tries to simulate a block 64m x 64m, falling about 400m down the screen. As well as that, the engine is optimized for objects with dimensions between 0.1 and 10m, so it's unlikely to give us good results.

All of this means we'll need to convert our dimensions from pixels to a scale that works with the engine, and back again when it's time to draw. Fear not, help is at hand!

In the sample project entitled 'Farseer Physics Engine 3.2 SimpleSamples XNA' that you downloaded (or in the source code below) you will find a class called ConvertUnits (in the folder Samples/DemoBaseXNA). This is a helper class designed to solve exactly this problem. Copy this file into your project and change the namespace to FarseerPlatformer (or whatever you called your project).
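As a rough illustration of what ConvertUnits does (the exact pixels-per-metre ratio depends on the value set in your copy of the class, so the numbers in the comments are examples only):

// Converting pixel values into simulation (metre) values...
float simWidth = ConvertUnits.ToSimUnits(64f);                  // e.g. 64 px -> 0.64 m at 100 px/m
Vector2 simPos = ConvertUnits.ToSimUnits(new Vector2(100, 0));

// ...and back to pixels when it's time to draw.
float pixelWidth = ConvertUnits.ToDisplayUnits(simWidth);       // back to 64 px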

Now we're ready to create our block. Replace your SetupPhysics method with the following:

protected virtual void SetUpPhysics(World world, Vector2 position, float width, float height, float mass)
{
       fixture = FixtureFactory.CreateRectangle(world, ConvertUnits.ToSimUnits(width), ConvertUnits.ToSimUnits(height), mass, ConvertUnits.ToSimUnits(position));
       body = fixture.Body;
       fixture.Body.BodyType = BodyType.Dynamic;
       fixture.Restitution = 0.3f;
       fixture.Friction = 0.5f;
}

As you can see, the parameters for FixtureFactory.CreateRectangle are pretty much an amalgamation of those for creating the Body and Geom objects in the tutorial, and replacing the PhysicsSimulator with our World instance.

The only parameter that possibly deserves special mention is the 4th, which is density.
3.2 calculates mass of your object based on its area and density. For this guide I have just substituted in the values that the original tutorial gave for mass, but you'll need to play with all of the values you use to get the effect you want.

We use ConvertUnits.ToSimUnits() to convert our parameters to those suitable for use in the simulator. If you have a look inside the source code for this class you'll see that most of its methods are overloaded for doubles, ints, Vector2s etc, so we really can just throw this method at our problem.

We also need to make some changes to the draw method. Replace PhysicsObject.draw() with the following method:

public virtual void Draw(SpriteBatch spriteBatch)
{
       spriteBatch.Draw(texture, new Rectangle((int)ConvertUnits.ToDisplayUnits(body.Position.X), (int)ConvertUnits.ToDisplayUnits(body.Position.Y), (int)width, (int)height), null, Color.White, body.Rotation, origin, SpriteEffects.None, 0f);
}

Notice that we have to convert all the values that we get from our simulated body back to pixel units, as otherwise we'd have a very, very small box!

You can check out the full source for the new PhysicsObject class below.

Now you just need to tweak the values inside your call to the PhysicsObject constructor like so:


box = new PhysicsObject(world, new Vector2(100, 0), 64, 64, 10, squareTex);

And we're done for this section. Your box should be falling down the screen in all its 3.2 glory.

Laying grounds

We actually don't need to change much in this section for the example to work; we just need to swap out the PhysicsSimulator parameter for a World parameter:

public class StaticPhysicsObject : PhysicsObject
{
    public StaticPhysicsObject(World world, Vector2 position, float width, float height, Texture2D texture)
        : base(world, position, width, height, 1, texture)
    {
        fixture.Body.BodyType = BodyType.Static;
    }
}

However, when you run your program, you may notice that there is a gap of a couple of pixels between the box and the ground when it's at rest. I have two possible explanations for this, but I haven't been able to confirm either. The first is rounding errors when we convert our units, and the second is that Farseer 3.2 adds a 1cm 'buffer' around objects in the simulation to allow for Continuous Collision Detection (CCD), which is a new feature of the engine.

You can either live with this visual discrepancy, or tweak the SpriteBatch.Draw() call in PhysicsObject to look like this:


spriteBatch.Draw(texture, new Rectangle((int)ConvertUnits.ToDisplayUnits(body.Position.X), (int)ConvertUnits.ToDisplayUnits(body.Position.Y), (int)width + 1, (int)height + 1), null, Color.White, body.Rotation, origin, SpriteEffects.None, 0f);

I've left the original version (which shows the gap) in the attached source code, but either approach is fine.

Joints

RevoluteJoints seem to be much the same in 3.2 as they were in 2.1.3, so we only really need to tweak the CompositePhysicsObject class in a few places. Check out the class below:


public class CompositePhysicsObject
{
        protected PhysicsObject physObA, physObB;
        protected RevoluteJoint revJoint;

        public CompositePhysicsObject(World world, PhysicsObject physObA, PhysicsObject physObB, Vector2 relativeJointPosition)
        {
            this.physObA = physObA;
            this.physObB = physObB;
            revJoint = JointFactory.CreateRevoluteJoint(world, physObA.fixture.Body, physObB.fixture.Body, ConvertUnits.ToSimUnits(relativeJointPosition));
            physObA.fixture.CollisionFilter.IgnoreCollisionWith(physObB.fixture);
            physObB.fixture.CollisionFilter.IgnoreCollisionWith(physObA.fixture);
        }

        public virtual void Draw(SpriteBatch spriteBatch)
        {
            physObA.Draw(spriteBatch);
            physObB.Draw(spriteBatch);
        }

}

I won't painstakingly go through the changes as a quick look at the old and new classes side by side should highlight the differences.

However, one thing to note is that the last parameter in the call to JointFactory.CreateRevoluteJoint() is just relativeJointPosition, rather than physObA.Position + relativeJointPosition (I changed poA to physObA and poB to physObB).
This is because in 3.2 this method takes the position of the joint in the local coordinates of the first Body in the parameter list, rather than in world coordinates as in the old method.
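For example, to hinge physObB to the bottom edge of physObA you'd pass something like this (heightA here is just a stand-in for whatever height, in pixels, you gave physObA):

//The anchor is in physObA's local space, so (0, heightA / 2) is the bottom-centre
//of physObA no matter where it sits in the world
Vector2 relativeJointPosition = new Vector2(0, heightA / 2);
revJoint = JointFactory.CreateRevoluteJoint(world, physObA.fixture.Body, physObB.fixture.Body, ConvertUnits.ToSimUnits(relativeJointPosition));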

Again keep an eye out for when to use pixels as your units, and when you need to use ConvertUnits to convert to meters.

Springs

Unfortunately AngleSpring isn't in 3.2. According to Farseer's creator it shouldn't be too much work to port the AngleSpring from 2.1.3 to 3.2, but I decided not to go down that route (if you decide to give it a go though, be sure to share your results with the community!).

Instead, I decided to 'fake' an AngleSpring with an AngleJoint with a TargetAngle of 0, and play around with its Softness and BiasFactor properties until I found some values that seemed to work.

The result was similar to the original, but not quite the same. If you come up with a better solution please do share it with the community!

Below is my faked SpringPhysicsObject, and the call to create it. The values are just those that I found to work, feel free to play around with them and see if you can get a better result.

public class SpringPhysicsObject : CompositePhysicsObject
{
        protected AngleJoint springJoint;

        public SpringPhysicsObject(World world, PhysicsObject physObA, PhysicsObject physObB, Vector2 relativeJointPosition, float springSoftness, float springBiasFactor)
            : base(world, physObA, physObB, relativeJointPosition)
        {
            springJoint = JointFactory.CreateAngleJoint(world, physObA.fixture.Body, physObB.fixture.Body);
            springJoint.TargetAngle = 0;
            springJoint.Softness = springSoftness;
            springJoint.BiasFactor = springBiasFactor;
        }
}
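The call to create it lives in the attached source; it looks something like the line below, but the Softness and BiasFactor values here are purely illustrative (as are the spring, physObA and physObB names), so swap in whatever you settle on after experimenting:

spring = new SpringPhysicsObject(world, physObA, physObB, new Vector2(0, 32), 0.005f, 0.2f);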

Force

Once again we don't need to change much here beyond the PhysicsSimulator/World swap. The only other thing to note is that the value for force used in the tutorial makes your box zip off the screen, never to be seen again. Originally I played around with the value until I got one that felt OK.

As you no doubt saw from the original tutorial, impulses seem to work better for this sort of movement anyway, so that's what I've used here instead.


public class Character : PhysicsObject
{
        public float forcePower;
        protected KeyboardState keyState;
        protected KeyboardState oldState;


        public Character(World world, Vector2 position, float width, float height, float mass, Texture2D texture)
            : base(world, position, width, height, mass, texture)
        {
        }

        public virtual void Update(GameTime gameTime)
        {
            HandleInput(gameTime);
        }

        protected virtual void HandleInput(GameTime gameTime)
        {
            keyState = Keyboard.GetState();

            //Apply force in the arrow key direction
            Vector2 force = Vector2.Zero;
            if (keyState.IsKeyDown(Keys.Left))
            {
                force.X -= forcePower * (float)gameTime.ElapsedGameTime.TotalSeconds;
            }
            if (keyState.IsKeyDown(Keys.Right))
            {
                force.X += forcePower * (float)gameTime.ElapsedGameTime.TotalSeconds;
            }
            if (keyState.IsKeyDown(Keys.Up))
            {
                force.Y -= forcePower * (float)gameTime.ElapsedGameTime.TotalSeconds;
            }
            if (keyState.IsKeyDown(Keys.Down))
            {
                force.Y += forcePower * (float)gameTime.ElapsedGameTime.TotalSeconds;
            }

            body.ApplyLinearImpulse(force, body.Position);

            oldState = keyState;
        }
}

Most of the changes here are ones we have come across before, such as the use of Fixtures. However, there are quite a few minor tweaks that need to be made, so I've included the whole of the source here in the text, as well as attaching the file below.

Other than again remembering to keep track of when you need to use pixels and when you need to use meters, the only other thing to note here is that body.LinearVelocity has to be replaced in its entirety. You can't change the X and Y components individually as you could in 2.1.3.
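In other words, wherever the old code set the components directly, you now assign a whole new vector:

//2.1.3 style - no longer possible in 3.2:
//body.LinearVelocity.X = 5f;

//3.2 style - build a new vector, preserving the component you don't want to change
body.LinearVelocity = new Vector2(5f, body.LinearVelocity.Y);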


public class CompositeCharacter : Character
{
        public Fixture wheel;
        public FixedAngleJoint fixedAngleJoint;
        public RevoluteJoint motor;
        private float centerOffset;

        public Activity activity;
        protected Activity oldActivity;

        private Vector2 jumpForce;
        private float jumpDelayTime;

        private const float nextJumpDelayTime = 1f;
        private const float runSpeed = 8;
        private const float jumpImpulse = -500;

        public CompositeCharacter(World world, Vector2 position, float width, float height, float mass, Texture2D texture)
            : base(world, position, width, height, mass, texture)
        {
            if (width > height)
            {
                throw new Exception("Error width > height: can't make character because wheel would stick out of body");
            }

            activity = Activity.None;

            wheel.OnCollision += new OnCollisionEventHandler(OnCollision);
        }

        protected override void SetUpPhysics(World world, Vector2 position, float width, float height, float mass)
        {
            //Create a fixture with a body almost the size of the entire object
            //but with the bottom part cut off.
            float upperBodyHeight = height - (width / 2);

            fixture = FixtureFactory.CreateRectangle(world, (float)ConvertUnits.ToSimUnits(width), (float)ConvertUnits.ToSimUnits(upperBodyHeight), mass / 2);
            body = fixture.Body;
            fixture.Body.BodyType = BodyType.Dynamic;
            fixture.Restitution = 0.3f;
            fixture.Friction = 0.5f;
            //also shift it up a tiny bit to keep the new object's center correct
            body.Position = ConvertUnits.ToSimUnits(position - (Vector2.UnitY * (width / 4)));
            centerOffset = position.Y - (float)ConvertUnits.ToDisplayUnits(body.Position.Y); //remember the offset from the center for drawing

            //Now let's make sure our upperbody is always facing up.
            fixedAngleJoint = JointFactory.CreateFixedAngleJoint(world, body);

            //Create a wheel as wide as the whole object
            wheel = FixtureFactory.CreateCircle(world, (float)ConvertUnits.ToSimUnits(width / 2), mass / 2);
            //And position its center at the bottom of the upper body
            wheel.Body.Position = body.Position + ConvertUnits.ToSimUnits(Vector2.UnitY * (upperBodyHeight / 2));
            wheel.Body.BodyType = BodyType.Dynamic;
            wheel.Restitution = 0.3f;
            wheel.Friction = 0.5f;

            //These two bodies together are width wide and height high :)
            //So lets connect them together
            motor = JointFactory.CreateRevoluteJoint(world, body, wheel.Body, Vector2.Zero);
            motor.MotorEnabled = true;
            motor.MaxMotorTorque = 1000f; //set this higher for some more juice
            motor.MotorSpeed = 0;

            //Make sure the two fixtures don't collide with each other
            wheel.CollisionFilter.IgnoreCollisionWith(fixture);
            fixture.CollisionFilter.IgnoreCollisionWith(wheel);

            //Set the friction of the wheel to float.MaxValue for fast stopping/starting
            //or set it higher to make the character slip.
            wheel.Friction = float.MaxValue;
        }

        //Fired when we collide with another object. Use this to stop jumping
        //and resume normal movement
        public bool OnCollision(Fixture fix1, Fixture fix2, Contact contact)
        {
            //Check if we are both jumping this frame and last frame
            //so that we ignore the initial collision from jumping away from 
            //the ground
            if (activity == Activity.Jumping && oldActivity == Activity.Jumping)
            {
                activity = Activity.None;
            }
            return true;
        }

        protected override void HandleInput(GameTime gameTime)
        {
            oldActivity = activity;
            keyState = Keyboard.GetState();

            HandleJumping(keyState, oldState, gameTime);

            if (activity != Activity.Jumping)
            {
                HandleRunning(keyState, oldState, gameTime);
            }

            if (activity != Activity.Jumping &amp;&amp; activity != Activity.Running)
            {
                HandleIdle(keyState, oldState, gameTime);
            }

            oldState = keyState;
        }

        private void HandleJumping(KeyboardState state, KeyboardState oldState, GameTime gameTime)
        {
            if (jumpDelayTime < 0)
            {
                jumpDelayTime += (float)gameTime.ElapsedGameTime.TotalSeconds;
            }

            if (state.IsKeyUp(Keys.Space) && oldState.IsKeyDown(Keys.Space) && activity != Activity.Jumping)
            {
                if (jumpDelayTime >= 0)
                {
                    motor.MotorSpeed = 0;
                    jumpForce.Y = jumpImpulse;
                    body.ApplyLinearImpulse(jumpForce, body.Position);
                    jumpDelayTime = -nextJumpDelayTime;
                    activity = Activity.Jumping;
                }
            }

            if (activity == Activity.Jumping)
            {
                if (keyState.IsKeyDown(Keys.Right))
                {
                    if (body.LinearVelocity.X < 0)
                    {
                        body.LinearVelocity = new Vector2(-body.LinearVelocity.X * 2, body.LinearVelocity.Y);
                    }
                }
                else if (keyState.IsKeyDown(Keys.Left))
                {
                    if (body.LinearVelocity.X > 0)
                    {
                        body.LinearVelocity = new Vector2(-body.LinearVelocity.X * 2, body.LinearVelocity.Y);
                    }
                }
            }

            
        }

        private void HandleRunning(KeyboardState state, KeyboardState oldState, GameTime gameTime)
        {
            if (keyState.IsKeyDown(Keys.Right))
            {
                motor.MotorSpeed = runSpeed;
                activity = Activity.Running;
            }
            else if (keyState.IsKeyDown(Keys.Left))
            {
                motor.MotorSpeed = -runSpeed;
                activity = Activity.Running;
            }

            if (keyState.IsKeyUp(Keys.Left) && keyState.IsKeyUp(Keys.Right))
            {
                motor.MotorSpeed = 0;
                activity = Activity.None;
            }
        }

        private void HandleIdle(KeyboardState state, KeyboardState oldState, GameTime gameTime)
        {
            if (activity == Activity.None)
            {
                activity = Activity.Idle;
            }
        }

        public override void Draw(SpriteBatch spriteBatch)
        {
            //These first two draw calls draw the upper and lower body independently
            spriteBatch.Draw(texture, new Rectangle((int)ConvertUnits.ToDisplayUnits(body.Position.X), (int)ConvertUnits.ToDisplayUnits(body.Position.Y), (int)width, (int)(height - (width / 2))), null, Color.White, body.Rotation, origin, SpriteEffects.None, 0f);
            spriteBatch.Draw(texture, new Rectangle((int)ConvertUnits.ToDisplayUnits(wheel.Body.Position.X), (int)ConvertUnits.ToDisplayUnits(wheel.Body.Position.Y), (int)width, (int)width), null, Color.White, wheel.Body.Rotation, origin, SpriteEffects.None, 0f);

            //This last draw call shows how to draw these two bodies with one texture (drawn semi-transparent here so you can see the inner workings)            
            spriteBatch.Draw(texture, new Rectangle((int)Position.X, (int)(Position.Y), (int)width, (int)height), null, new Color(1, 1, 1, 0.5f), body.Rotation, origin, SpriteEffects.None, 0f);
        }

        public override Vector2 Position
        {
            get
            {
                return (ConvertUnits.ToDisplayUnits(body.Position) + Vector2.UnitY * centerOffset);

            }
        }
}


Afterthoughts 

Well, hopefully this has given you enough to take what you learned in the original tutorial and carry it on into Farseer 3.2. As I said earlier, you'll need to decide for yourself whether the potential confusion of using meters rather than pixels as the units for your physics simulation is worth it for the new features that 3.2 offers.

It may seem tricky, but as long as you keep track of which units you need to use where, it's fairly easy to overcome.