Dextergui provides a graphical user interface to the main functionality of dexter, a package for the analysis of educational tests. You can start it from the R (or RStudio) console in the following way:
```r
library(dextergui)
dextergui()
```
With dextergui you can:

* import and organize test data in a project database
* inspect classical test and item statistics
* make profile plots and test for differential item functioning (DIF)
* fit the Extended Nominal Response Model, by CML or with a Gibbs sampler
* compute ability estimates, plausible values and score-to-ability transformation tables
Below we give a brief tour of dextergui. The navigation bar at the top of the page lets you switch between the main components of dexter. We’ll start on the project tab.
The easiest way to familiarize yourself with dextergui is to open an example project, e.g. the verbal aggression dataset, by clicking on example datasets and then choosing verbal aggression. This is a rich single-booklet dataset with person and item properties, ideal for exploring the GUI.
Dexter organizes test data in projects. Projects are miniature databases, saved as a single SQLite file, that contain all your project data, such as items, scoring rules and response data. On the project overview page you can start a new project, open an existing project or browse through an example project. The example datasets are taken from the packages dexter, psych, sirt and MLCIRTwithin, if these are installed.
All projects created in dexter or dextergui can be opened with the open project button. For now we will focus on starting a new project. When you click start new project you will be asked where to save it. To fill your project with data, you have to take two more steps: import a scoring rules file and import your response data.
After starting a new project, you first have to import a scoring rules file, because dexter needs to know the score for every permissible response to each item.
The scoring rules file is an Excel or csv file and can have one of two formats: a long format with the columns item_id, response and item_score, or a key format with the columns item_id, nOptions and key (for multiple choice items).
In both cases the first row must contain the exact column names. Below you see examples of the two formats.
item_id | response | item_score |
---|---|---|
S1DoCurse | 0 | 0 |
S1DoCurse | 1 | 1 |
S1DoCurse | 2 | 2 |
S1DoScold | 0 | 0 |
S1DoScold | 1 | 1 |
S1DoScold | 2 | 2 |
item_id | nOptions | key |
---|---|---|
mcItem_1 | 3 | C |
mcItem_2 | 4 | A |
mcItem_3 | 3 | A |
If you have open-ended items, these can be accommodated by scoring them before importing. In that case the column response should be equal to the column item_score (i.e. response 0 gets score 0, response 3 gets score 3, and so on).
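For reference, the same step can be performed from the R console with dexter itself. Below is a minimal sketch, assuming the rules are stored in an Excel file with the columns described above; the file names, the project name and the gender person property are illustrative choices, not part of the GUI workflow.

```r
library(dexter)
library(readxl)  # assuming the rules are stored in an Excel file

# long format: one row per item/response combination
rules <- read_excel("scoring_rules.xlsx")   # columns: item_id, response, item_score

# key format: derive the long-format rules from the key and the number of options
# keys  <- read_excel("keys.xlsx")          # columns: item_id, nOptions, key
# rules <- keys_to_rules(keys)

# create the project database; declaring person properties here is optional
db <- start_new_project(rules, "my_project.db",
                        person_properties = list(gender = "unknown"))
```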
After selecting a file, dexter will show a preview of the rules. If everything looks OK, press Import to import the scoring rules.
Once you have imported the rules, the next step is to import response data: select the Data import tab in the navigation bar at the top of the page.
Response data is imported one booklet at a time. First provide a unique name for the booklet or test form in the booklet id field. Next, use the Browse button to select a file with response data. This file can be in Excel or csv format and the first row must contain column names.
The column names should be item IDs, each of which must exactly match an item ID in your scoring rules; note that capitalization matters. You can also include person properties such as grade or study program. It is also good practice (but not mandatory) to include a column person_id with a unique identification of each person in your data. Apart from the header row, each row in your data should represent a single person, as in the example below.
gender | anger | S1DoCurse | S1DoScold | S1DoShout | S1WantCurse | S1WantScold | S1WantShout |
---|---|---|---|---|---|---|---|
Male | 20 | 1 | 0 | 1 | 0 | 0 | 0 |
Female | 16 | 1 | 2 | 1 | 2 | 2 | 2 |
Female | 18 | 0 | 0 | 0 | 1 | 0 | 0 |
Female | 27 | 2 | 0 | 0 | 2 | 2 | 0 |
Female | 21 | 0 | 0 | 0 | 0 | 1 | 0 |
Female | 21 | 2 | 2 | 0 | 0 | 2 | 0 |
Female | 15 | 0 | 1 | 0 | 2 | 2 | 0 |
Female | 17 | 1 | 1 | 0 | 2 | 2 | 1 |
When you select a file, dexter shows a preview of the response data. Any column with a name that doesn’t match a known item id will be ignored by default. You can import such columns as person properties by clicking the corresponding buttons.
column | import as | values |
---|---|---|
Gender | ignored | Male, Female, Female, Female, Female, Female, Female, Female, Female, Female, … |
S1DoCurse | item | 1, 1, 0, 2, 0, 2, 0, 1, 1, 1, … |
S1DoScold | item | 0, 2, 0, 0, 0, 2, 1, 1, 0, 0, … |
S1DoShout | item | 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, … |
S1WantCurse | item | 0, 2, 1, 2, 0, 0, 2, 2, 1, 0, … |
S1WantScold | item | 0, 2, 0, 2, 1, 2, 2, 2, 0, 0, … |
If the preview looks all right, you can import the response data by pressing the Import button.
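The console counterpart of this step is add_booklet(). A sketch for a csv file, continuing from the project created above; the file name and booklet id are illustrative:

```r
# read one booklet of response data; keep the column names exactly as they are
responses <- read.csv("booklet_agg.csv", check.names = FALSE)

# columns matching item IDs are imported as responses; columns matching person
# properties declared in start_new_project() (here: gender) are stored as well
add_booklet(db, responses, booklet_id = "agg")
```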
In the classical analysis tab you will find two further tabs, booklets and items.
The booklets tab shows a table of classical statistics per booklet and plots of the item total regressions.
booklet_id | n_items | alpha | mean_pvalue | mean_rit | mean_rir | max_booklet_score | n_persons |
---|---|---|---|---|---|---|---|
agg | 24 | 0.888 | 0.339 | 0.527 | 0.468 | 48 | 316 |
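As an aside, these booklet statistics (and the per-item table shown further below) can also be computed in the console with dexter's tia_tables(). The exact names of the list elements it returns have varied a little between dexter versions, so the sketch below simply inspects the result:

```r
tia <- tia_tables(db)
str(tia, max.level = 1)  # a list with a booklet-level and an item-level table
# the booklet-level table holds n_items, alpha, mean p-value, mean rit/rir;
# the item-level table (shown on the items tab) holds p-value, rit and rir per item
```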
The plot shows regressions based on two models: the interaction model is shown with a thicker but lighter line, and the Extended Rasch model is shown with a thinner, darker line. The unsmoothed data is shown as red dots. The curtains are drawn at the 5% lowest and the 5% highest sum scores. This can all be adjusted with the input fields on top of the plot.
The Rasch model fits this item very well, so the two curves practically coincide; this need not always be the case. The interaction model is quite interesting for practice. It shares the conditioning properties of the Rasch model, which means that we can predict the expected item score at each observed total score. Hence, we can represent the model and the observed data on the same plot, without tricks or circular reasoning. This makes the interaction model a very useful tool to evaluate the fit of the Rasch or Extended Nominal Response Model.
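A console sketch that produces this kind of plot with dexter's fit_inter(); the item ID is taken from the example data above:

```r
# fit the interaction model (and the Rasch/ENORM regressions) on the response data
m <- fit_inter(db)

# item-total regression for one item; curtains and observed points can be
# adjusted via the plot method's arguments
plot(m, items = "S1DoCurse")
```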
The items tab shows classical statistics for the items on the left, including RiT, RiR and pvalue.
booklet_id | item_id | mean_score | sd_score | max_score | pvalue | rit | rir | n_persons |
---|---|---|---|---|---|---|---|---|
agg | S1DoCurse | 1.082 | 0.807 | 2 | 0.541 | 0.582 | 0.519 | 316 |
agg | S1DoScold | 0.832 | 0.815 | 2 | 0.416 | 0.651 | 0.596 | 316 |
agg | S1DoShout | 0.468 | 0.709 | 2 | 0.234 | 0.520 | 0.460 | 316 |
agg | S1WantCurse | 1.123 | 0.827 | 2 | 0.562 | 0.537 | 0.468 | 316 |
agg | S1WantScold | 0.930 | 0.850 | 2 | 0.465 | 0.593 | 0.528 | 316 |
agg | S1WantShout | 0.712 | 0.777 | 2 | 0.356 | 0.529 | 0.464 | 316 |
agg | S2DoCurse | 1.003 | 0.832 | 2 | 0.502 | 0.590 | 0.527 | 316 |
agg | S2DoScold | 0.684 | 0.780 | 2 | 0.342 | 0.633 | 0.578 | 316 |
agg | S2DoShout | 0.326 | 0.615 | 2 | 0.163 | 0.532 | 0.481 | 316 |
agg | S2WantCurse | 1.222 | 0.772 | 2 | 0.611 | 0.529 | 0.465 | 316 |
The distractor plot on the right shows a non-parametric regression of each response option on the test score. In this case it shows a multiple choice item which obviously has poor distractors.
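Distractor plots can also be drawn directly in the console; the item ID below is illustrative:

```r
# non-parametric regression of each response option on the test score
distractor_plot(db, "mcItem_1")  # item ID is illustrative
```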
Note: if you spot a key error, the table of responses right under the distractor plot allows you to change the score for a response by clicking on the respective score and changing it.
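In the console, a key error can be corrected by passing the amended rules to touch_rules(); the item and scores below are hypothetical:

```r
# hypothetical correction: response B of mcItem_1 should have been the key
fixed <- data.frame(item_id    = "mcItem_1",
                    response   = c("B", "C"),
                    item_score = c(1, 0))
touch_rules(db, fixed)
```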
How do different (groups of) persons achieve the same test score? To answer this question you first need to define person properties and item properties in your data. Then you can make a profile plot (on a per-booklet basis), as in the console sketch below.
This plot of the verbal aggression dataset (see the included examples) shows that, controlling for the same overall verbal aggression test score, women tend to Shout more and men tend to Curse and Scold more. (Men tend to be more verbally aggressive than women overall, so it would not be true to say that women tend to shout more in general.)
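A console sketch, assuming an item property named behavior (curse/scold/shout) and a person property gender have been added to the project; these names are assumptions for illustration:

```r
# item properties must be defined first, e.g. (illustrative file name)
# add_item_properties(db, read.csv("item_properties.csv"))

profile_plot(db, item_property = "behavior", covariate = "gender")
```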
There is also a test for DIF at the item-pair level.
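The corresponding console function is DIF(); a sketch, again assuming a person property gender:

```r
d <- DIF(db, person_property = "gender")
plot(d)  # pairwise DIF statistics per item pair
```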
In this tab you can fit an IRT model using Conditional Maximum Likelihood (CML) or a Gibbs sampler (Bayesian estimation). For this you need a connected design (i.e. your booklets must contain overlapping items), as shown in the network plot below.
Clicking the button fit_enorm will fit the Extended Nominal Response Model, which is a polytomous Rasch model where the scores for each response can have any integer value.
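In the console this corresponds to fit_enorm(); method "Bayes" uses the Gibbs sampler mentioned above. A minimal sketch:

```r
parms <- fit_enorm(db)                         # Conditional Maximum Likelihood (default)
parms_bayes <- fit_enorm(db, method = "Bayes") # Gibbs sampler
coef(parms)                                    # estimated item parameters
```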
Once you have estimated a model you can use it in several ways with the tabs below.
When you have calibrated your model, dexter can compute ability estimates for each person. You can choose between a Maximum Likelihood Estimate (MLE) and an Expected A Posteriori (EAP) estimate, the latter with a choice of a Jeffreys or a normal prior.
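A console sketch, using the parms object from the calibration sketch above:

```r
# MLE ability estimates
abl <- ability(db, parms = parms, method = "MLE")

# or EAP estimates with a Jeffreys prior
abl_eap <- ability(db, parms = parms, method = "EAP", prior = "Jeffreys")

head(abl)  # one ability estimate (theta) per person
```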
Plausible values are random samples from the posterior distribution of ability. They are closer to the true distribution of ability in the population than straightforward ability estimates. Plausible values are continuous, as opposed to ability estimates, which are granular because of the discrete nature of test scores; their distribution is therefore much smoother.
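A console sketch with plausible_values(); drawing five values per person and using gender as a covariate are illustrative choices:

```r
pv <- plausible_values(db, parms = parms, nPV = 5, covariates = "gender")
hist(pv$PV1, breaks = 30, main = "Plausible values (first draw)")
```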
There is an extremely rich selection of plots to see how ability is distributed in your population(s). Enriching your data with person properties (see the projects tab) will let you make the most of these. You can also view or download the estimated abilities per person.
The Score-ability tables tab lets you compute a score-to-ability transformation table for each booklet. This is often desirable, e.g. for automated systems or for score transformation tables supplied with paper tests so that teachers can score them.
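In the console this corresponds to ability_tables(); a sketch, again using the parms object from above (the output file name is illustrative):

```r
# score-to-ability table: one row per possible booklet score
tab <- ability_tables(parms, method = "MLE")
head(tab)
# write.csv(tab, "score_ability_table.csv", row.names = FALSE)
```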
After you have fitted a model, the items tab shows the estimated parameters and a graphical item fit measure.
The extensive online documentation for dexter shows how to do many of the things you can do in the GUI from the R console, and explains much more about plausible values, profile plots, DIF and other topics.
If you have problems downloading images and data, the solution is to open the application in a browser (use the open in browser button at the top of the screen). Best results are achieved with the Chrome browser, although most others work fine. The exception is currently Microsoft Edge, in which the application does not yet work very well.
If you have another question, encounter a bug or would like to request a feature, please tell us about it on https://github.com/dexter-psychometrics/dexter/issues .