Updated 2024-03-13, Game Server ver. 6.028, for Tanvi and any future research team members
This is your situation:
The rest of this document will list the suggested steps for you to undertake to make the above plan a success, with links to more detailed documentation.
You'll want to have a local instance of the Rule Game Server, both for quick-cycle development work (so that you won't need to push updates to GitHub and deploy them to Plesk hosts every time you edit a rule set file or a trial list file) and for the analysis stage, since the analysis scripts use the same Java JAR files that the Game Server does.
To do this, follow the instructions in the Setup Guide, Option A (installing from WAR files). This will involve installing the MySQL database server, installing the Apache Tomcat server, creating a master config file, and downloading two WAR files for use with Tomcat.
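As a rough sketch, once MySQL and Tomcat are running, deploying the WAR files is just a matter of dropping them into Tomcat's webapps directory, where Tomcat auto-deploys them. The exact WAR file names and download URLs are in the Setup Guide; the Tomcat path below is an assumption about a common layout:

    # Assumed Tomcat location -- adjust to your installation.
    # Download the two WAR files per the Setup Guide first.
    sudo cp w2020.war /opt/tomcat/webapps/          # name inferred from the /w2020 URL below
    sudo cp <second-file>.war /opt/tomcat/webapps/  # the other WAR, as named in the Setup Guide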
The instructions mention getting experiment control files from GitHub, and link to more detailed instructions: experiment control file setup guide. Since you're planning to add your new experiment control files to the main GitHub repository for them, it may be easiest for you to directly link the Game Server's experiment control file directory (/opt/w2020/game-data) to the GitHub repo (instead of using a space under your home directory as a staging area and copying files from there to /opt/w2020/game-data):
    cd /opt/w2020
    mkdir game-data
    cd game-data
    git init
    git remote add origin https://github.com/RuleGame/Rule-Game-game-data.git
    git pull origin master
Once everything has been installed, you should be able to access Tomcat with the localhost URL (presumably at port 8080, if you have not changed any default settings). Thus, http://localhost:8080/ will be just Tomcat's own info page; http://localhost:8080/w2020 will be the main Rule Game documentation page. On that page, you will be able to find the launch pages (under the heading "The front-end (GUI) tools"), such as the main prod launch page at http://localhost:8080/w2020/front-end-form.jsp, as well as the specialized launch page for your (Tanvi's) project.
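If you prefer the command line, a quick smoke test of both URLs (assuming curl is installed) just checks that each one responds:

    curl -I http://localhost:8080/          # Tomcat's own info page
    curl -I http://localhost:8080/w2020/    # Rule Game documentation page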
Most of your development work will consist of editing files in your experiment control directory, then trying them out by going to the specialized launch page for your (Tanvi's) project or directly to the underlying URL, http://wwwtest.rulegame.wisc.edu/w2020/front-end-form-2.jsp?exp=U/JF/tht/exp1&prefix=RU-JF-THT-&dev=false. (Change the host name and the experiment plan name as appropriate). For quicker testing, since you probably don't want to go through the intro pages every time, you can check the "No intro" button on the form.
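For example, when testing against your local instance, the same launch page (with the same experiment plan and prefix as in the URL above) would be:

    http://localhost:8080/w2020/front-end-form-2.jsp?exp=U/JF/tht/exp1&prefix=RU-JF-THT-&dev=false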
After you have created or modified some rule set files etc., there are two preliminary steps before you start testing. First, you may want to validate the updated rule set in the Validation form. It is not fool-proof, but it catches some common syntax errors. Second, you may want to force the server to re-read your experiment control files, since it may still be serving cached copies of the old versions.
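The bluntest way to force a re-read is to restart Tomcat; the exact command depends on how you installed it (both lines below are assumptions about common setups):

    sudo systemctl restart tomcat    # typical Linux service name (assumption)
    brew services restart tomcat     # MacOS with Homebrew (assumption)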
Once you have decided that the current version of your experiment control files works well enough to be checked into the GitHub repo, to be later deployed to a Plesk host, you can check it in:
    git add myNewFile-1.txt myNewFile-2.csv ...
    git commit -a -m 'My commit message'
    git push origin master
Testing your rule sets and trial list files on your localhost is convenient, but the trouble is that only you can see them. To enable other members of your team, and, eventually, a wider player population you invite, to see your games, you need to deploy them to one of the public-facing Plesk hosts at UW. (We call them "Plesk hosts" because, unlike most other computers you deal with on a daily basis, one uses the Plesk interface to control one's web site deployed to them.)
In brief, once you have your UW netid and have been granted access to the Plesk console, you use the Plesk control panel to deploy your experiment control files to a Plesk host of your choice, whenever you so desire. For details, see: Setting up the Rule Game Server on a DoIT Shared Hosting host#Experiment control files
Which of the two Plesk hosts to use? I would suggest you use the dev host (wwwtest.rulegame.wisc.edu) when you just want to show your game to other team members, and maybe collect some playing data from them; this can be done as frequently as you desire. Use the prod host (rulegame.wisc.edu) when you feel that the experiment is fully ready and you want to start inviting outside players, so that you can start accumulating "real data" for subsequent analysis. The URL you'll give to the outside players will be prod-host-based, e.g. http://rulegame.wisc.edu/w2020/front-end-form-2.jsp?exp=RU/JF/tht/exp1&prefix=RU-JF-THT-
When people play games on a Rule Game Server, the server accumulates data describing all the particulars of each episode played: what the initial board was, what moves the player attempted, and what the outcome of each move was. Various auxiliary information, such as the player's guesses (which the GUI client solicits at the end of each episode) and the demographic info (from the final questionnaire), is recorded as well. As our Data Guide explains, different kinds of data are saved in different ways: some are recorded in MySQL database tables (the "read-and-write data"), while others go to CSV files ("write-only data"). For most of your analyses, you'll need both.
Local. As you play games on localhost, the Game Server running on your laptop keeps saving data locally. To see the location of your saved data, take a look at your master configuration file (/opt/w2020/w2020.conf). The important lines are the following:
    #---- The Rule Game server saved data directory.
    #---- The Game Server will write transcript files etc to it.
    FILES_SAVED = "/opt/w2020/saved";

    #---- The name of the database on your MySQL server that your Rule Game Server uses
    JDBC_DATABASE = "game";

The first variable tells you where the CSV data files are; the second, the name of the database where the SQL tables are.
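A quick way to check both settings, and to confirm that data are actually accumulating as you play:

    grep -E 'FILES_SAVED|JDBC_DATABASE' /opt/w2020/w2020.conf
    ls /opt/w2020/saved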
Imported. Once you have the UW netid and have been granted access to the Plesk hosts, you can start importing saved data from these two hosts, in order to analyze them locally. The process is described in the Pull guide. As the Pull guide explains, when you pull data from a remote server, the pull script puts them in their own separate places: a new database and a new data directory. These two variables are saved in a new configuration file in your current directory. Keep that configuration file; you'll be passing it as an argument to various analysis scripts (such as the Analyze Transcript tool), so that those scripts will know what data to look at.
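The shape of the workflow is roughly as follows; the script name and output file name here are placeholders, not the real ones (the Pull guide has the actual invocation):

    # Hypothetical names, for illustration only -- see the Pull guide
    ./pull.sh wwwtest.rulegame.wisc.edu    # creates a new config file, e.g. pulled-wwwtest.conf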
There are several ways in which you can look at the saved data (either local or imported ones).
Raw data. The saved CSV files are directly available for your inspection in the saved data directory (either /opt/w2020/saved for the local data, or whatever directory you pulled imported data into), and can be processed by any Perl or Python script you care to write.
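For instance, to get a first look at what's there:

    # Peek at the directory layout and at one of the CSV files
    ls -R /opt/w2020/saved | head -30
    head -5 /opt/w2020/saved/transcripts/*.csv   # subdirectory name is an assumption; see the Data Guide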
To view the data in the MySQL database, you can just run a mysql client and type queries at its prompt. This is good for a quick check. But how will you feed these data to your analysis scripts or other tools? There are two export options, described below.
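For example (the database name comes from the JDBC_DATABASE setting above; your MySQL user name may differ):

    # Connect to the "game" database, then look around at the mysql> prompt
    mysql -u root -p game
    show tables;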
Exporting MySQL tables - option 1. (This probably won't work on your MacOS laptop.) MySQL, in principle, supports exporting data from database tables into CSV files; this is discussed here: SELECT ... INTO statement. We do have a sample script that does it; see export.sh in the Data Guide. Unfortunately, MySQL is rather finicky with file-writing permissions; while we know that exporting works on our Linux hosts (export.sh works there), this may not necessarily be the case on your current MySQL installation on your MacOS computer.
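The core of that approach looks like this (the table name is illustrative, and whether the write succeeds depends on the server's file-writing permissions, e.g. the secure_file_priv setting):

    # Illustrative only; export.sh in the Data Guide is the working version
    mysql game -e "SELECT * FROM PlayerInfo INTO OUTFILE '/tmp/PlayerInfo.csv'
                   FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\n';"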
Exporting MySQL tables - option 2. (Recommended for use on your MacOS laptop) We also have a JDBC-based export tool, which should work regardless of the quirks of the local MySQL server installation. See the docs for export-2.sh for details.
We have several tools that can be used to extract the data and analyze them. See the Tools Guide. For example, analyze-transcripts.sh allows you to specify the set of players you are interested in (e.g. "everybody who played experiment plan X"); it extracts the relevant information from the MySQL server and from the transcript files, cleans it up, and produces a bunch of processed transcript files, one per player.
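A typical invocation might look something like this; the option name and argument format are assumptions, and the Tools Guide documents the real ones:

    # Local data: hypothetical invocation with the plan name from the launch URL above
    analyze-transcripts.sh U/JF/tht/exp1
    # Imported data: point the tool at the config file created by the pull script
    analyze-transcripts.sh -config pulled-wwwtest.conf U/JF/tht/exp1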