FLOSS Game Dev

Enabling Creativity



LMMS 1.2.0 Released

Finally, after four years of development, the LMMS Digital Audio Workstation has been officially updated to version 1.2.0! This update brings major UI improvements and tons of bug fixes, and the application is now distributed as an AppImage for those of us on GNU/Linux. You can just download it and give it a go. Convenient! 🙂

The LMMS AppImage can be downloaded directly from the LMMS website, or you can grab the application’s source over on GitHub, if you prefer.

Happy music-making, everyone!

Cheers! 🙂



Update: 9-8-16

Guten Tag!

My time in Boston, MA went off without a hitch. The RISC-V conference was extremely informative, and I made some very important connections while over there. Gaming hardware companies like NVIDIA were present, showcasing their new RISC-V based graphics architecture. I don’t know how soon the public will get their hands on it, but hey, at least it’s something, right?

In other news, I’m currently getting my Blender skills up to speed with some videos on YouTube. I recommend checking out Darrin Lile, who is actually a certified Blender Instructor. His videos are VERY comprehensive, and they gently guide you along the tedious process of modeling, texturing, rigging, and animating. I feel a lot more confident now that I’ve watched some of his work.

I’ve heard some rumors that a new Blender update may be just around the bend. >_> Also, in checking out the LMMS GitHub page, it looks like this Digital Audio Workstation is inching closer to its 1.2.0 release. Exciting things are coming up!

Cheers! 🙂



One Track Mind: Preparing Your Audio for 3D Games

Hello!

First off, I want to express my condolences to those who’ve suffered as a result of the Paris attacks that just happened on the 13th. As an American of French descent, my heart and spirit go out to you during this trying time.

Today we will be covering a very important aspect of 3D game development… Audio! If you’ve ever wanted to create a game so immersive that you feel like you’re actually there, you know that you’re going to need sounds to come from every possible direction of the aural spectrum.

Thankfully, most 3D audio APIs already provide this functionality, and one of them (OpenAL) just happens to be cross-platform! Joy! 😀

To begin with, it’s important to understand the difference between stereophonic and monaural sound. Stereophonic sound uses two channels to create the illusion of depth, whereas monaural sound (as hinted at by the title of this article) contains only one channel and plays from a single position.

How does this factor into a 3D game?

OpenAL handles both three-dimensional and two-dimensional audio with relative ease. For your own purposes, though, you’ll most likely use stereophonic audio for background music and for sound effects localized to the player’s user experience (navigating menus, or anything not directly related to the action on screen). You’ll use monaural audio, instead, for all in-game sound effects and positional dialogue exchanged between the Player-Character (PC) and Non-Player Characters (NPCs), thereby creating a natural feeling of depth.

So, now that we understand which types of sound assets are best suited to 3D manipulation, why do they need to be monaural in the first place?

OpenAL requires assets meant for 3D manipulation to be monaural because only monaural tracks can be processed that way (i.e., have their “center” adjusted) by the API. A monaural track is, by design, intended to be broadcast from one specific point, and OpenAL’s three-dimensional processing lets you define where that point is. A stereophonic file, on the other hand, has already been processed to create faux three-dimensional depth before the API ever sees it, so no matter where OpenAL places it in 3D space, the result will sound the same.
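To see why a single channel is what makes positioning possible, here’s a minimal sketch of constant-power panning in plain C++ (the principle, not OpenAL’s actual internals; the function name is mine): one monaural sample can be steered anywhere between the speakers, whereas a stereo file’s left/right balance was baked in at export time.

```cpp
#include <cmath>
#include <utility>

// Constant-power panning: steer one monaural sample anywhere between
// hard left (pan = -1.0) and hard right (pan = +1.0). A sketch of the
// principle only -- not OpenAL's actual implementation.
std::pair<float, float> panMono(float sample, float pan) {
    // Map pan from [-1, 1] onto an angle in [0, pi/2].
    float angle = (pan + 1.0f) * 0.25f * 3.14159265358979f;
    float left  = sample * std::cos(angle);
    float right = sample * std::sin(angle);
    return std::make_pair(left, right);
}
```

With pan = 0.0 both channels receive equal energy; at the extremes one channel goes silent. A stereo track offers no such knob.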

My audio content pipeline is currently the following:

-Make music/audio in Linux MultiMedia Studio

-IF it’s music, compress it to .OGG when exporting

-IF it’s a sound-effect, don’t compress it (use .WAV)

-IF I have sound effects, open Audacity

–Open the .WAV sound effect files within Audacity (one at a time)

–Manually mix each stereo .WAV sound effect down to one channel via (“Tracks” -> “Stereo Track to Mono”)

–Export each audio track with (“File” -> “Export Audio…”) and save it again as a .WAV file

At that point, your audio should be ready for 3D manipulation by any three-dimensional audio API! Time to celebrate! 🙂
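For what it’s worth, the mono-conversion step can also be scripted. This sketch (a hypothetical helper of mine, not part of any tool above) averages each left/right pair of interleaved 16-bit samples, which is effectively what Audacity’s stereo-to-mono mixdown does to the sample data:

```cpp
#include <cstdint>
#include <vector>

// Collapse interleaved 16-bit stereo samples (L, R, L, R, ...) into a
// monaural track by averaging each left/right pair.
std::vector<int16_t> stereoToMono(const std::vector<int16_t>& interleaved) {
    std::vector<int16_t> mono;
    mono.reserve(interleaved.size() / 2);
    for (std::size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        // Widen to 32 bits so the sum can't overflow before dividing.
        int32_t sum = int32_t(interleaved[i]) + int32_t(interleaved[i + 1]);
        mono.push_back(static_cast<int16_t>(sum / 2));
    }
    return mono;
}
```

You’d still need to read and rewrite the WAV headers around this, but the batch tedium disappears.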

As always, questions and comments are welcome.

Vive la France!  🙂



Update: 9-13-15

Hello!

Been a busy couple of weeks since I last posted.

I’ve been doing more research into MMORPG development and am currently looking into Erlang for the development of the game server. Given that the Erlang environment already contains a key-value store called “Mnesia”, I’ve been searching for a similarly-designed persistent store for the client side. So far, my findings have uncovered a solution called “Redis”. Redis has a C client named “Hiredis” that I can plug into the C++ code that I’ve written for my game. Both solutions (for client and server) are cross-platform to boot!
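To keep the game code decoupled from any one store, I’m sketching an interface along these lines (all names here are hypothetical). A Redis-backed subclass would forward these calls to Hiredis; the in-memory version below stands in for it so the sketch is self-contained:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical client-side persistence interface. A Redis-backed
// implementation would forward set()/get() to Hiredis.
class KeyValueStore {
public:
    virtual ~KeyValueStore() {}
    virtual void set(const std::string& key, const std::string& value) = 0;
    // Returns true and fills 'out' if the key exists.
    virtual bool get(const std::string& key, std::string& out) const = 0;
};

// In-memory stand-in, useful for tests or an offline mode.
class InMemoryStore : public KeyValueStore {
public:
    void set(const std::string& key, const std::string& value) {
        data_[key] = value;
    }
    bool get(const std::string& key, std::string& out) const {
        std::unordered_map<std::string, std::string>::const_iterator it =
            data_.find(key);
        if (it == data_.end()) return false;
        out = it->second;
        return true;
    }
private:
    std::unordered_map<std::string, std::string> data_;
};
```

Swapping the backing store then never touches gameplay code.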

I’m still in the content-creation phase as well… So, I’m definitely wearing a lot of hats at the moment. 🙂

On the music-composition front, I can wholeheartedly recommend that GNU/Linux users of Linux MultiMedia Studio (LMMS) just download and compile the source from the master branch of the project’s GitHub page. In trying to work with version 1.1.3, I found that the application kept crashing whenever I tried to compose music using SoundFont files. An instance of LMMS built from the master branch’s code has been running much more smoothly.

I’m also trudging my way through some Blender tutorials at the moment. Once I’ve finished this animation tutorial, I should be able to demonstrate the effectiveness of my game framework’s renderer, and start testing it alongside my recently-finished 3D audio solution. 🙂

As always, I’ll keep you posted on any interesting news, good tutorials, or random things I think you should know. 😀

Cheers!



Update: 2-8-15

Hello!

Just wanted to post something so people didn’t think the blog was dead… it isn’t!

I’m currently VERY busy due to starting a new job, so I’m finding myself spending most of my time focused on that.

If readers have any questions regarding Linux, or the pipeline tools (LMMS, Blender, GIMP, etc.), don’t hesitate to reach out; this blog exists for you! 🙂

Cheers!



The Pipeline: An Overview

Good evening!

So, as you may have already noticed, the title of this blog is “The Open-Source Game Development Pipeline”. Tonight, in this article, I’m going to talk about the pipeline itself.

There are three major components to creating a game: Audio, Image Assets, and a Game Engine to house and utilize them.

I’ve already mentioned various packages in one of my previous articles, and you’re going to need to learn how to use them if you want to make games!

So, let’s talk about the applications you’ll be using to create a game.

~AUDIO~

Linux is unique when it comes to sound design. JACK (package name “jackd”) is the standard low-latency audio server for routing sound between your audio applications and any internal/external sources. Learning how JACK works (which the “qjackctl” package can help with) will give you an advantage if you’re just starting out.

You’ll find (as you continue using Linux) that there are very few audio applications that can “do it all”. Most applications excel specifically at one thing. Take Hydrogen (package name “hydrogen”) for example. Hydrogen is an application that is purpose-built for creating beats using different sets of instruments. As a drum-machine, it can’t be beat (no pun intended), but creating an enticing percussive rhythm is just one part of a musical composition (unless that’s all you’re looking to do, then, hey, kudos!). There’s also melody and harmony, and there are other applications that excel in this department.

There are three applications I’d specifically recommend for those of you who happen to be passionate musicians: Linux MultiMedia Studio (package name “lmms”), Qtractor (package name “qtractor”), and Ecasound (package name “ecasound”). Both LMMS (I’ll be using the abbreviation for the rest of the article) and Qtractor are DAWs (Digital Audio Workstations), with interfaces built to handle most of the heavy-duty requirements of any serious sound designer. LMMS has the added benefit of a built-in beat editor, potentially mitigating the need for Hydrogen; that said, those who need Hydrogen’s depth and control will still use it. Ecasound caters specifically to musicians who want to record sound directly from an instrument they’ve plugged into their computer. The application runs from the command line, but is easy enough to figure out via its help option. You should pair it with Audacity (package name “audacity”) for any post-processing or editing needs.

Note: Installing the CALF plugins (package name “calf-plugins”) will give you some additional LV2 effects/instruments to use inside Qtractor. This package also interfaces with JACK, for those of you who’d like to do direct hardware recording via Ecasound.

~GRAPHICS~

So, unless you’re creating a command-line game, you’ll probably want to display some sort of visual content inside of its own window. In that case, you’re going to need tools to help create it.

Blender (package name “blender”) is a 3D modelling and animation tool you can use to create 3D characters, scenes, and objects for your game. Paired with GIMP (package name “gimp”), you can create nice textures to map onto your 3D shapes. If you prefer a more cel-shaded/vector-art look, you could always use Inkscape (package name “inkscape”) to achieve those results.

~ENGINE~

I’ve done my fair share of homework over which tools to use to create my game, and in the end, I’ve chosen to use libraries that are either BSD, MIT, Apache 2.0, or zlib licensed. The reason for this is to make sure I can make money off of my game if I want to, and also to avoid stepping on anybody’s toes by using their tools. Out of respect for the licenses themselves, I will be including a copy of each inside a text file called “LICENSE”, which will be housed in the application’s root directory for users to see.

There are capabilities the engine will need in order to deliver various types of game-play experiences effectively. Here’s a list of those I consider most important (roughly in order of importance):

-3D/2D rendering (fast, preferably)

-3D and 2D audio

-Joystick input

-Physics simulation

-Threading interface

-Networking interface
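In code, that checklist might translate into one abstract interface per subsystem. All names below are hypothetical; the concrete subclasses would wrap whichever libraries I end up choosing (Irrlicht, an audio API, SDL, a physics library, and so on):

```cpp
#include <memory>

// One abstract interface per capability from the list above; concrete
// subclasses would wrap the chosen libraries. All names hypothetical.
struct Renderer { virtual ~Renderer() {} /* 3D/2D drawing       */ };
struct Audio    { virtual ~Audio()    {} /* 3D and 2D playback  */ };
struct Input    { virtual ~Input()    {} /* joystick/keyboard   */ };
struct Physics  { virtual ~Physics()  {} /* simulation stepping */ };

// The engine owns one of each, so gameplay code never touches the
// underlying libraries directly and backends stay swappable.
struct Engine {
    std::unique_ptr<Renderer> renderer;
    std::unique_ptr<Audio>    audio;
    std::unique_ptr<Input>    input;
    std::unique_ptr<Physics>  physics;
};
```

The point of the indirection is exactly the license-and-tooling flexibility described above: swapping one backend stays a local change.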

I’m currently looking into using “Irrlicht” for the graphical/input requirements of the game engine. Since Irrlicht has OpenGL and SDL integrated within it, the input interface has already been properly abstracted for you. In other words, using Irrlicht lets you kill two birds with one stone… so why not?

Next on the list, allow me to explain the difference between two-dimensional and three-dimensional audio. Ambient noise and background music are examples of sound that occupy a static portion of the soundscape, and typically don’t need any depth added to them (or, in some cases, it was already added when the audio files were created). If I were making a game about war, though, and my player character were a soldier traversing a battlefield under heavy enemy fire, I would need three-dimensional audio to simulate the different positions of gunfire (or fired mortar shells, for instance). This requires an API (Application Programming Interface) which can achieve that effect. Thankfully, there’s already a handy wrapper around OpenAL Soft called “cAudio”, which I will be using to create 3D audio.
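To make the battlefield example concrete: OpenAL’s default distance model, “inverse distance clamped”, attenuates a source’s gain as it moves away from the listener. The library computes this for you once you set a source’s position, but the formula from the OpenAL specification is simple enough to reimplement for illustration (function name mine):

```cpp
#include <algorithm>

// Gain under OpenAL's AL_INVERSE_DISTANCE_CLAMPED model: 1.0 at the
// reference distance, falling off beyond it, with the distance clamped
// to the [refDist, maxDist] range before the formula is applied.
float inverseDistanceClamped(float distance, float refDist,
                             float maxDist, float rolloff) {
    distance = std::max(refDist, std::min(distance, maxDist));
    return refDist / (refDist + rolloff * (distance - refDist));
}
```

A mortar shell landing 10 units away (with refDist = 1 and rolloff = 1) plays at roughly a tenth of its original gain — exactly the depth cue stereo-only playback can’t provide.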

Next on the list is input. As I stated before, SDL handles this for us, so there’s no need to go into detail. SDL also comes with a cross-platform threading interface built in, so I won’t have to worry about that either (now we’re up to three birds). Thanks, Irrlicht! I’ll still need to do some homework on whether Irrlicht exposes the SDL layer it uses for input, and whether it also gives you access to threads. Otherwise, we may have to look into using PThreads… >_>
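If it turns out Irrlicht doesn’t expose SDL’s thread layer, std::thread (standard since C++11) is a portable middle ground before dropping down to raw PThreads. A toy sketch of the sort of background work an engine offloads (the worker, its name, and the counting are made up purely for illustration):

```cpp
#include <atomic>
#include <thread>

// Spawn a worker that queues 'buffers' audio buffers while the caller
// waits -- a stand-in for the streaming/decoding work an engine would
// offload from its main loop.
int runAudioWorker(int buffers) {
    std::atomic<int> queued(0);
    std::thread worker([&queued, buffers]() {
        for (int i = 0; i < buffers; ++i)
            queued.fetch_add(1);  // stand-in for decode-and-queue work
    });
    worker.join();  // a real main loop would poll instead of blocking
    return queued.load();
}
```

SDL’s own thread API maps closely onto this shape, so switching between the two later shouldn’t hurt.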

Now that leaves us with physics simulation and networking. As far as C++ goes, I’m new to programming with both of these, so I plan on giving Bullet a try and seeing how that plays out. As for networking, there are several high-level wrappers (built on top of BSD socket interfaces) that should give me an easy way to connect to a remote server. As of now, I’m undecided whether this functionality is really necessary for what I’d like to do… so, I’m going to put it on the shelf for the time being.

As stated in the title, this is just an overview. Each of these subjects requires much more depth to be truly helpful.

In spite of that, however, I hope what I’ve written is enough to whet your appetite! If something doesn’t make sense, look it up! Web-searching is your friend. 🙂

That’s it for now! Please reach out if you have any questions or require further guidance. See you next time.