Emojis in LibGDX

Yair Morgenstern
8 min read · Feb 10, 2024

How to add your own custom images to LibGDX fonts

LibGDX comes with a lot of good stuff built in, including a lot of magic regarding fonts. Behind the scenes, a font is just a way of getting character images and rendering them — so to add our own custom characters, all we need to do is take control of the functions that return those characters, and have them return our characters instead.

We implemented this in Unciv with these classes, leading to images such as this:

Bottom text showcases Building, Tech, and Stat icons — buttons showcase hourglass icon for turns

So how?

The base class for fonts is BitmapFont.BitmapFontData, which is what we’ll want to override :)

It’s built so well regarding image packing, regions, rendering etc., that we’ll only want to override 2 functions: getGlyph() and getGlyphs().

The function definition of “getGlyph(ch: Char): BitmapFont.Glyph” tells us everything. It’s how, when rendering text (a bunch of chars), we determine the individual image to render for each one. If we override that, we can return our own images! Fantastic, so why is this an entire post?!

A. If you want to be space-efficient you want to generate the image once and reuse it — you can use BitmapFontData functions for that, but you need to know which ones!

B. Converting UI elements to usable glyphs is surprisingly gnarly.

Where are your glyph images stored?

BitmapFontData assumes that each Glyph (single character of font) is linked to a TextureRegion — that is, “take this specific rectangle within a larger image”. This is for rendering performance — for the full story see “Minimize texture swapping” here.

Chances are, you’re using one of two things — either a pre-packed bitmap font — which is what BitmapFontData is made for — or a dynamically-generated FreeTypeFont, using FreeTypeFontGenerator, which generates your packed image on the go. Both are great!

Since we’re adding in custom characters, you could go the pre-packed route. We needed to generate characters based on dynamically loaded data (from mods), so we went the second route — building the images as we go.

To do this, NativeBitmapFontData holds a LibGDX PixmapPacker. As its name suggests, this packs our images together for the rendering performance. So the actual pixmap data is held in-memory, and the references to that are held in the BitmapFontData.
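For orientation, constructing such a packer is a one-liner — the page size and padding here are illustrative choices, not necessarily Unciv's actual values:

```kotlin
import com.badlogic.gdx.graphics.Pixmap
import com.badlogic.gdx.graphics.g2d.PixmapPacker

// Illustrative parameters: 1024x1024 pages, RGBA8888 format,
// 1px padding between packed images, no border duplication
val packer = PixmapPacker(1024, 1024, Pixmap.Format.RGBA8888, 1, false)
```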

Show me the code!

Our getGlyph function looks like:

override fun getGlyph(ch: Char): BitmapFont.Glyph = super.getGlyph(ch) ?: createAndCacheGlyph(ch)

private fun createAndCacheGlyph(ch: Char): BitmapFont.Glyph {
    val charPixmap = getPixmapFromChar(ch)

    val glyph = BitmapFont.Glyph()
    glyph.id = ch.code
    glyph.width = charPixmap.width
    glyph.height = charPixmap.height
    glyph.xadvance = glyph.width

    // Check alpha to guess whether this is a round icon
    // Needs to be done before disposing charPixmap, and we want to do that soon
    val assumeRoundIcon = charPixmap.guessIsRoundSurroundedByTransparency()

    val rect = packer.pack(charPixmap)
    charPixmap.dispose()
    glyph.page = packer.pages.size - 1 // Glyph is always packed into the last page for now.
    glyph.srcX = rect.x.toInt()
    glyph.srcY = rect.y.toInt()

    // Reader, ignore this for now - we have special rules for round icons to make them look good :)
    if (ch.code >= FontRulesetIcons.UNUSED_CHARACTER_CODES_START)
        glyph.setRulesetIconGeometry(assumeRoundIcon)

    // If a page was added, create a new texture region for the incrementally added glyph.
    if (regions.size <= glyph.page)
        packer.updateTextureRegions(regions, filter, filter, false)

    setGlyphRegion(glyph, regions.get(glyph.page))
    setGlyph(ch.code, glyph)
    dirty = true

    return glyph
}

Every time we’re asked for a glyph, we first check whether it’s already in the BitmapFontData list of glyphs. If so, hooray!

If not, we need to create it! We need to get the image data (we’ll get to that later) and add the metadata so the font knows how to render it.

We’ll then pack it into our large image, dispose of the original to not leak memory, and tell BitmapFontData that it exists using setGlyphRegion() and setGlyph() functions.

The ‘dirty’ bit is used so we only do heavy-duty packing functions once, if we’re rendering a large piece of text, which brings us to our second function:

override fun getGlyphs(run: GlyphLayout.GlyphRun, str: CharSequence, start: Int, end: Int, lastGlyph: BitmapFont.Glyph?) {
    packer.packToTexture = true // All glyphs added after this are packed directly to the texture.
    super.getGlyphs(run, str, start, end, lastGlyph)
    if (dirty) {
        dirty = false
        packer.updateTextureRegions(regions, filter, filter, false)
    }
}

Okay, but how do I get the Pixmap?

So far, the easy part — assuming we can generate the Pixmap, we can pack it for rendering performance, “wrap it” in Glyph data, and let BitmapFontData do its magic. But where does the Pixmap come from?

We use 2 sources — one for images added to the game specifically for in-game text (such as the hourglass), and another for on-the-fly images generated from UI Actors.

Predefined images

The first is pretty simple — since you have predefined images, you can find a character that you’re sure will be unused and use it for that.

You just need to get the Drawable from your regular TextureAtlas (which you should be using for performance), or wherever you usually get images for Image actors. Once you have the Drawable, get the Region (“this rectangle in this large image”) and use this function. The font-metrics parts are there so the image will “look nice” inside your font (line up with the other characters); if you don’t want to worry about that for now, just delete those parts.

I went through “texture region to pixmap conversion” hell so you don’t have to, you’re welcome :)
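For reference, here's a hedged sketch of what such a conversion can look like — this is the standard TextureData route in LibGDX, not necessarily Unciv's exact code:

```kotlin
import com.badlogic.gdx.graphics.Pixmap
import com.badlogic.gdx.graphics.g2d.TextureRegion

// Pull the backing texture's pixels via its TextureData, then copy out
// just the region's rectangle into a fresh Pixmap the caller owns.
fun getPixmapFromTextureRegion(region: TextureRegion): Pixmap {
    val textureData = region.texture.textureData
    if (!textureData.isPrepared) textureData.prepare()
    val texturePixmap = textureData.consumePixmap()

    val pixmap = Pixmap(region.regionWidth, region.regionHeight, Pixmap.Format.RGBA8888)
    pixmap.drawPixmap(
        texturePixmap,
        0, 0,                           // destination x, y
        region.regionX, region.regionY, // source x, y within the full texture
        region.regionWidth, region.regionHeight
    )

    // If the TextureData says we own the consumed pixmap, dispose it to avoid a leak
    if (textureData.disposePixmap()) texturePixmap.dispose()
    return pixmap
}
```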

Dynamically generated images

The real kicker, though, is the ability to take arbitrary Actors and render them into static images, that you can then use within your regular texts.

The trick here is to use a FrameBuffer to “render” your actors onto, but keep it around for reuse — otherwise you get nasty memory leaks!

private val frameBuffer by lazy {
    // Size here is way too big, but it's hard to know in advance how big it needs to be.
    // Gdx world coords, not pixels.
    FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.width, Gdx.graphics.height, false)
}
private val spriteBatch by lazy { SpriteBatch() }
private val transform = Matrix4() // for repeated reuse without reallocation

/** Get a Pixmap for a "show ruleset icons as part of text" actor.
 *
 * Draws onto an offscreen frame buffer and copies the pixels.
 * Caller becomes owner of the returned Pixmap and is responsible for disposing it.
 *
 * Size is such that the actor's height is mapped to the font's ascent (close to
 * ORIGINAL_FONT_SIZE * GameSettings.fontSizeMultiplier), the actor is placed like a letter into
 * the total height as given by the font's metrics, and width scaled to maintain aspect ratio.
 */
fun getPixmapFromActor(actor: Actor): Pixmap {
    val (boxWidth, boxHeight) = scaleAndPositionActor(actor)

    val pixmap = Pixmap(boxWidth, boxHeight, Pixmap.Format.RGBA8888)

    frameBuffer.begin()

    Gdx.gl.glClearColor(0f, 0f, 0f, 0f)
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT)

    spriteBatch.begin()
    actor.draw(spriteBatch, 1f)
    spriteBatch.end()
    Gdx.gl.glReadPixels(0, 0, boxWidth, boxHeight, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixmap.pixels)
    frameBuffer.end()

    return pixmap
}

Do not ask how long it took to get to this magic formula.

But wait, we’re missing something — what’s that scaleAndPositionActor()? Oh it’s probably nothing, just making sure that the image will line up with the rest of the text.

…I lied, there’s actually some magic bullshit going on, but for your My First Image you can replace that with `return actor.width to actor.height` — that function is a nice-to-have, late-stage improvement for nice display.
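If you do stub it out, a minimal stand-in (our naming, no font-metric alignment) could look something like this:

```kotlin
import com.badlogic.gdx.scenes.scene2d.Actor

// Trivial version: just place the actor at the framebuffer origin and
// use its own size, rounded to whole pixels for the Pixmap constructor.
// The real version would also scale the actor to match the font's metrics.
private fun scaleAndPositionActor(actor: Actor): Pair<Int, Int> {
    actor.setPosition(0f, 0f)
    return actor.width.toInt() to actor.height.toInt()
}
```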

Wait, what char are we talking about?! This is a CUSTOM emoji!

Okay, you caught me, we actually glossed over another important aspect. When you have a string of text to render, each char encapsulates an idea of some image you intend to render. If you have a custom image, that means you need a custom char.

When talking about predefined images, you can find some char and predetermine “A means Apple” — and you can keep A as a const, to reference it for nice strings, like “${Fonts.Apple}”. But you can’t do that for dynamically generated images!
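As a sketch of what that looks like (the names and codepoints here are illustrative, not Unciv's actual ones):

```kotlin
// Illustrative only: predefined emoji chars kept as consts so strings stay readable.
object Fonts {
    const val turn = '⏳'       // hourglass, rendered from a predefined image
    const val apple = '\uE001'  // "A means Apple", using a Private Use Area char
}

// Usage: build display strings via plain interpolation
fun remainingTurnsText(turns: Int) = "$turns${Fonts.turn}"
```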

We solved this by

  • Starting a counter at the beginning of a Unicode Private Use Area (we’ll never get to 5K emojis!)
  • Keeping a hashmap of “name of object” to “char allocated”
  • Every object we want to assign, we add to the map and increment the counter — this means we can easily “reset” to use an entirely different set of emojis!
  • We need to provide an Actor for the eventual rendering — so the inputs are:

// See https://en.wikipedia.org/wiki/Private_Use_Areas
// char encodings 57344 to 63743 (U+E000-U+F8FF) are not assigned
internal const val UNUSED_CHARACTER_CODES_START = 57344
private const val UNUSED_CHARACTER_CODES_END = 63743

val rulesetObjectNameToChar = HashMap<String, Char>()
val charToRulesetImageActor = HashMap<Char, Actor>()
private var nextUnusedCharacterNumber = UNUSED_CHARACTER_CODES_START

fun addChar(objectName: String, objectActor: Actor) {
    if (nextUnusedCharacterNumber > UNUSED_CHARACTER_CODES_END) return
    val char = Char(nextUnusedCharacterNumber)
    nextUnusedCharacterNumber++
    rulesetObjectNameToChar[objectName] = char
    charToRulesetImageActor[char] = objectActor
}

This does mean that every time you want to add an object’s emoji to in-game text, you just need to look up its associated char in rulesetObjectNameToChar and add it to your String.
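In other words — here's a minimal stand-alone sketch of that lookup, with a plain map standing in for FontRulesetIcons.rulesetObjectNameToChar:

```kotlin
// Illustrative: object names mapped to their allocated Private Use Area chars
val rulesetObjectNameToChar = hashMapOf(
    "Library" to '\uE000',
    "Granary" to '\uE001'
)

// Prefix a name with its emoji char if one was allocated, else return it unchanged
fun withIcon(objectName: String): String {
    val char = rulesetObjectNameToChar[objectName] ?: return objectName
    return "$char $objectName"
}
```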

Putting it all together

The getPixmapFromChar that we referenced in the very first code block looks like this:

private fun getPixmapFromChar(ch: Char): Pixmap {
    return when (ch) {
        in Fonts.allSymbols -> getPixmapForTextureName(Fonts.allSymbols[ch]!!)
        in FontRulesetIcons.charToRulesetImageActor ->
            try {
                // This sometimes fails with a "Frame buffer couldn't be constructed: incomplete attachment" error, unclear why
                FontRulesetIcons.getPixmapFromActor(FontRulesetIcons.charToRulesetImageActor[ch]!!)
            } catch (_: Exception) {
                Pixmap(0, 0, Pixmap.Format.RGBA8888) // Empty space
            }
        else -> fontImplementation.getCharPixmap(ch)
    }
}
  • If it’s a predefined image, get the pixmap from the image
  • If it’s registered to some actor, render the actor and give me that image
  • Otherwise, this is a regular character, give me the character pixmap for the regular font (depending on your implementation you may not want this here — this assumes that the regular text will be packed together with the emojis)

OBVIOUSLY, there’s a lot here that’s custom-built for our application.

But there’s a lot more that is definitely globally applicable.

  • Using FrameBuffer to render Actors to Pixmap
  • Converting TextureRegions to Pixmap
  • Wrapping Pixmaps in Glyphs and overriding BitmapFontData for text rendering
  • Using Unicode Private Use Areas for chars for your custom images

A lot of this is super technical, but I hope this helps anyone else going on this journey! ❤️

Aside — Native font generation

I personally generate the glyphs without using a prebuilt LibGDX font. This approach solves a specific problem: I need to provide fonts for many different languages, including Chinese, Japanese and Korean, and those fonts are very large! To limit the download size, I utilize the fonts already present on the device, implementing the functions for Android and Desktop separately, which also allows users to pick the font they want from their device.

This does have a downside: if users have funny fonts, their text will display funny. So if you’re aiming for a specific look & feel this may not be for you, but it does work for us :)
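For the curious, here's a hedged sketch of what the Desktop side of such a per-platform implementation can look like, using java.awt to rasterize a single char with a system font — an illustration of the idea, not Unciv's actual code. The resulting image's pixels would then be copied into a LibGDX Pixmap to serve as getCharPixmap's return value:

```kotlin
import java.awt.Color
import java.awt.Font
import java.awt.RenderingHints
import java.awt.image.BufferedImage

// Illustrative Desktop-side sketch: rasterize a char using a font already
// installed on the system, instead of bundling large font files with the game.
fun renderCharToImage(ch: Char, fontSize: Int = 50): BufferedImage {
    val font = Font(Font.SANS_SERIF, Font.PLAIN, fontSize)

    // Measure the char using a throwaway 1x1 image's graphics context
    val probe = BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB)
    val measureGraphics = probe.createGraphics()
    measureGraphics.font = font
    val metrics = measureGraphics.fontMetrics
    val width = maxOf(1, metrics.charWidth(ch))
    val height = metrics.height
    measureGraphics.dispose()

    // Draw the char into a correctly-sized transparent image
    val image = BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB)
    val g = image.createGraphics()
    g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_ON)
    g.font = font
    g.color = Color.WHITE // white glyph on transparent background, tintable later
    g.drawString(ch.toString(), 0, metrics.ascent)
    g.dispose()
    return image
}
```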


Yair Morgenstern

Creator of Unciv, an open-source multiplatform reimplementation of Civ V