
Genshin Impact Anki deck #

Created: 2022.05.03
Last edit: 2023.03.13

It’s quite the tradition among Japanese learners to publish parts of their Anki Mining decks, so others may get inspired by them or straight up use them. This ~1000 note deck is an excerpt of my Mining deck, which was and still is being created in part from the video game Genshin Impact. This post goes into the thought process behind the deck and how it was created, with sound clips below every screenshot for reference. Of course, using someone else’s Mining deck doesn’t carry nearly the same benefit as making one yourself, so this article is mainly meant to document my workflow and to provide a jumping-off point for people setting up their own. Link to the deck on AnkiWeb (if AnkiWeb ends up pulling the deck due to copyright concerns, a copy is in the release section here).

All cards have in-game sound + screenshot, and almost all additionally have a dictionary sound file + pitch accent.
[Card screenshot with dictionary audio and in-game audio]

Why Genshin Impact? #

A couple of things come together to make Genshin quite the enjoyable learning experience. The obvious one first: except for minor side quests, all dialogs are voiced and progress sentence by sentence, click by click, as is common in JRPGs and visual novels. This gives enough time to grasp the dialog’s content. Funnily enough, in-game time does not stop during dialogs except in some quests, so sometimes multiple in-game days would pass by as I grasped the contents of a dialog.

Another point is the writing style. A hotly debated topic in the player-base is whether or not the addition of Paimon hurts the delivery of the story. The character constantly summarizes events as they happen and repeats commands or requests that were given by another character just moments ago during dialog. The main criticism brought up is that this makes the story-flow very child-like, which is a rather obvious design goal of the game - catering to a younger audience. What may be an eyesore to many a player, though, is a godsend in the eyes of a language learner.
Paimon often describes a situation the player witnessed mere moments ago, making the actual statement of a sentence trivial to understand.
[Card screenshot with dictionary audio and in-game audio]

And sometimes Paimon straight up becomes a dictionary herself and defines a word like a glossary text would.
[Card screenshot with dictionary audio and in-game audio]

There is a very dialog- and lore-heavy story-line in Dragonspine involving Albedo, which has some of the most information-dense dialog in the game. Paimon often commented on how she had completely lost the plot and didn’t understand anything. During that story-line I felt as if Paimon was sympathizing with me as I battled my way towards understanding it. For me, Paimon really made this game shine as a learning tool.

The good outcome #

I’m constantly surprised how much Genshin has propelled my speech forward. Similar to a movie you can quote decades later or video game moments being etched into memory, there is something about the media we consume that makes it stick. I found myself using and recalling vocabulary acquired via Genshin faster than vocabulary from other sources. Or maybe it’s just that video games make you engage with their world and characters for far longer, and with much more intensity, than other forms of indirect study could.

The dangerous outcome #

It’s common knowledge that media uses artistic delivery in speech, has speech patterns rarely used in everyday life and uses a stylized way of writing. Basically, all of it is 役割語 (yakuwarigo, “role language”). And yet, knowing that, I still managed to trip up in minor ways. Case in point:
[Card screenshot with dictionary audio and in-game audio]

I used the 誉れです expression instinctively from time to time, and just recently someone noted that this expression has a rather archaic, regal tone. It was quite the funny situation, but it goes to show that even knowing what kind of media I was consuming still didn’t save me from tripping up. Coming from outside the language, it’s unavoidable to misinterpret an expression’s nuance, I guess. Though in this case, the in-game dialog should have really tipped me off, as the character speaking, Ningguang, uses it to tell a joke with a somewhat sarcastic undertone.

Info and Structure #

[Example card “hitome” with dictionary audio and in-game audio]

[Example card “meccha” with dictionary audio and in-game audio]

Grammar #

I also have a bunch of grammar cards mixed in, from times when I encountered a new piece of grammar and recognized it as such. For those I pasted in the excellent JLPT Sensei summary images.

[Grammar card example with a KireiCake definition, dictionary audio and in-game audio]

A big surprise to me was that the YomiChan dictionary “KireiCake” contains URL-shortened links from time to time, like waa.ai/v4YY in the above card. In this case it leads to an in-depth discussion on Yahoo about that grammar point. (Archive link, in case it goes down.) The love and patience of the Japanese-learning online community is truly magnificent. From /djt/ threads on image-boards to user-scripts connecting kanji learning services to collections of example recordings from anime - stuff like that has me in awe.

To translate or not to translate #

In the beginning I did toggle to English to screenshot the English version for the card’s back-side; see the example card below. However, on recommendation from members of the English-Japanese Language Exchange Discord server, I stopped doing so. Mainly because of localization discrepancies between the two versions; the differences got especially heavy when more stylized dialogue was involved. But also in part because this is not recommended for mining in general. Quoting from the Core3k description:

Don’t use the field ‘Sentence-English’ in your mined cards. In fact, get rid of it once you have a solid understanding of Japanese. When you mine something you should already have understood the sentence using the additional information on your cards.

[Card screenshot with dictionary audio and in-game audio]

How it was captured #

If you are completely new to the mining workflow, check out AnimeCards.site before jumping into my specific workflow.

The main workhorse of everything is Game2Text, though the setup in the case of Genshin is not straightforward. Game2Text is a locally run server that opens as localhost in your browser. Game2Text then allows you to combine a couple of things: capture a window’s content via Firefox’s and Chrome’s native window-capture feature, run a region of the window through OCR - like the offline and open-source Tesseract or the more powerful online service ocr.space - and finally translate the text with the help of popular plug-ins like YomiChan or its actively maintained community fork Yomitan.
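To give a rough idea of what that pipeline boils down to, here is a minimal sketch of the capture-then-OCR step. This is not Game2Text’s actual code; it assumes the mss, Pillow and pytesseract packages, a local Tesseract install with Japanese language data, and a made-up dialog-box region:

```python
# Minimal sketch of the capture-then-OCR step, NOT Game2Text's actual code.
# Assumes: pip install mss pillow pytesseract, plus a Tesseract install
# with the Japanese ("jpn") language data.
import mss
import pytesseract
from PIL import Image

# Hypothetical screen region roughly covering Genshin's dialog box.
DIALOG_REGION = {"left": 300, "top": 800, "width": 1300, "height": 160}

def grab_dialog_text(region: dict = DIALOG_REGION) -> str:
    with mss.mss() as sct:
        shot = sct.grab(region)
    img = Image.frombytes("RGB", shot.size, shot.rgb)
    # --psm 6 treats the region as one uniform block of text.
    return pytesseract.image_to_string(img, lang="jpn", config="--psm 6").strip()

if __name__ == "__main__":
    print(grab_dialog_text())
```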

Game2Text has native AnkiConnect integration, which builds Anki cards from the captured screenshot, the currently selected word and a dictionary definition. However, this native Anki integration sometimes fails to recognize phrases that more complex dictionary setups in YomiChan do detect. Luckily, Game2Text can essentially give you the best of both worlds, since you can just use YomiChan directly to create a card. Though in that case, you have to handle screenshots yourself.
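For reference, adding such a card over the AnkiConnect HTTP API looks roughly like the sketch below. The deck, model and field names and the media paths are placeholders, not the actual ones from my deck:

```python
# Minimal sketch of adding a note through AnkiConnect's HTTP API (what
# Game2Text and YomiChan drive under the hood). Deck, model, field names
# and media paths are placeholders, not the ones from my deck.
import json
import urllib.request

ANKI_CONNECT_URL = "http://127.0.0.1:8765"

def anki_invoke(action: str, **params):
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    req = urllib.request.Request(ANKI_CONNECT_URL, data=payload)
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

note = {
    "deckName": "Mining",
    "modelName": "Japanese Mining",
    "fields": {"Word": "誉れ", "Sentence": "(mined sentence goes here)"},
    "options": {"allowDuplicate": False},
    "tags": ["genshin"],
    # Local media files get copied into Anki's collection.media folder.
    "audio": [{"path": "clips/homare.mp3", "filename": "homare.mp3",
               "fields": ["SentenceAudio"]}],
    "picture": [{"path": "clips/homare.png", "filename": "homare.png",
                 "fields": ["Screenshot"]}],
}

print(anki_invoke("addNote", note=note))
```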

Genshin fails to get captured by the browser unless it runs in windowed mode. Unfortunately there is no proper borderless window mode in Genshin, unless you use a patcher to get a borderless window. Another option is to run OBS in administrator mode, capture the game, open a full-screen “source monitor” window of Genshin’s source signal and let the browser capture that. This is the option I went with on my desktop PC. On my laptop with a dedicated Nvidia GPU this led to a massive performance loss however, presumably because some interaction between the iGPU and the dedicated GPU ends up saturating memory bandwidth. In that case I used the borderless gaming patcher.

If Game2Text has a hook-script for the game in question, it can hook into the game’s memory and read out the dialog strings, forgoing the sometimes imprecise OCR. No such hook-script exists for Genshin Impact (to my knowledge). I tried to create one by poking around with Cheat Engine, but there was no obvious block of strings in memory, and I decided to drop that approach out of worry of having my account banned. If the OCR is imprecise, there is always the on-screen dialog box to check against.

Handling Audio #

Across multiple systems though, Game2Text failed to create cards with sound for me. It successfully captures sound into .wav files, but transcodes them into fully silent .mp3 files, which it then attaches to the cards. So instead of working with those .wav files, I simply let Audacity capture the sound output via its “Windows WASAPI” mode and the speaker-loopback recording it unlocks, running as one non-stop recording. On dialog, I would select the needed passage and, via a macro bound to a hotkey, perform the conversion to mono, normalization and export to an .mp3 file.
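For anyone who prefers scripting this instead of an Audacity macro, a rough equivalent of that mono + normalize + export step could look like this (assuming pydub and an ffmpeg install; this is not the macro I actually used, and the file names are made up):

```python
# Rough scripted equivalent of that Audacity macro (down-mix to mono,
# peak-normalize, export as mp3). Assumes pydub and ffmpeg are installed.
from pydub import AudioSegment
from pydub.effects import normalize

def export_clip(wav_path: str, mp3_path: str) -> None:
    clip = AudioSegment.from_wav(wav_path)
    clip = clip.set_channels(1)            # stereo loopback capture -> mono
    clip = normalize(clip, headroom=3.0)   # peak at -3 dBFS, like the old macro
    clip.export(mp3_path, format="mp3", bitrate="128k")

export_clip("selection.wav", "card_audio.mp3")
```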

Originally, I normalized all audio by setting the peak sample to -3 dB in Audacity. This turned out to be not quite optimal, as the range of voice profiles is very broad. With peak-sample normalization, bright and dark voices did not end up playing back at the same loudness, since it does not account for human hearing being more sensitive to some frequencies than to others. So I batch re-encoded every audio file to be normalized to -15 LUFS loudness instead (conversion workflow explained here), the more modern approach. Although the difference was subtle, the dialogue sounded a bit more balanced from card to card after that.
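The batch re-encode itself can be done in many ways; one possible sketch using ffmpeg’s loudnorm filter is below (folder names are made up, and the workflow linked above may differ in detail):

```python
# One possible way to batch re-normalize existing card audio to -15 LUFS
# using ffmpeg's loudnorm filter. Folder names are made up.
import pathlib
import subprocess

SRC = pathlib.Path("card_audio")        # folder with the original .mp3 files
DST = pathlib.Path("card_audio_lufs")
DST.mkdir(exist_ok=True)

for mp3 in SRC.glob("*.mp3"):
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(mp3),
            "-af", "loudnorm=I=-15:TP=-1.5:LRA=11",  # target -15 LUFS integrated loudness
            "-ar", "44100",                          # loudnorm outputs 192 kHz, resample back down
            "-b:a", "128k",
            str(DST / mp3.name),
        ],
        check=True,
    )
```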

It would be optimal to have no music mixed in with the dialog, for the sake of cleaner audio during card reviews. However, the music is so incredibly good that going through the game without it would not have been even half as enjoyable. So background music often plays in the cards, the worst offender being Liyue Harbor’s background track, which manages to drown out dialog during its crescendo.

The actual workflow #

When I did not recognize a word, I would tab out of the game and select the capture rectangle in Game2Text to get the transcript. I would then use YomiChan to aid me in understanding the sentence. When the “logs” screen of Game2Text properly recognized the phrase in question, I would use it to create the card; when not, I would create the card via YomiChan and paste in the screenshot manually. I would also screenshot YomiChan’s pitch accent display and paste it into the reading field. Finally, I would tab into Audacity, select the needed passage, press the hotkey to export the sound as an .mp3 into a folder I had open, and drag & drop it onto the current Anki card.

Sometimes there was no dictionary sound reading in the Game2Text log screen but there was one in YomiChan - or vice-versa. In that case I would play the dictionary sound from the other source, let Audacity capture it and again export that sound passage. (Technically you can get the sound file directly or by creating a duplicate card, but that was too much of a hassle.) Finally, if neither of the two sources had a dictionary reading, I would manually check the JapanesePod101 dictionary, which surprisingly has a ton of obscure vocab readings as sound files. Just make sure to un-tick ‘most common 20,000 words’, tick ‘Include vulgar words’ and switch the mode from ‘Is’ to ‘Starts with’ to find more complex phrases.

This concludes my little write-up about the Genshin Impact part of my Anki mining deck.