The Inspera Assessment Product Roadmap includes high-level product priorities, and is reviewed and updated every three months. At any time in this cycle, the Roadmap contains diverse priorities that are addressed in development in a flexible and agile manner.
On a high level, the following areas define the direction of Inspera Assessment:
We are on a mission to continuously increase pedagogical freedom in assessments.
Create the assessments you need with our state-of-the-art e-assessment tools, whether that means essay-based assessments in the humanities or programming and math exercises in STEM.
Administrative staff need fast, transparent and secure assessment workflows: accessibility, smart marking and learner feedback, all while safeguarding assessment integrity.
Technical staff need a complete end-to-end e-assessment process that improves security, transparency and exam velocity while reducing human error and bias, with straightforward integrations and configuration on the path to full digitisation.
New grading tool
Grading 2.0 is Inspera's new tool for marking, grading and feedback. It will be released in several beta versions, starting with Beta 0.1 in autumn 2018.
The grading tool is being rebuilt from scratch to ensure reliability and fairness in marking, meet maturing requirements from complex, global assessment organisations, and to secure product resilience for future needs.
Chief Marker / Moderator Role
A moderated marking and grading process is sometimes necessary for assessment quality assurance, especially in higher education. To meet the growing needs of groups of markers and their managers, Inspera Assessment will introduce a new user role: Moderator.
With a dedicated user role for moderators, it will be possible to perform dynamic, on-screen moderation of the marking process. Moderators will be able to access all marking, or only the relevant marking, made by individual markers or marking committees, and provide comments on exams for review and audit purposes.
Assessments
The term Assessment refers to the concept of combining multiple tests into a “super-test”. Assessments allow tests and educational activities to be combined and evaluated in various ways, such as mandatory coursework plus a final exam, sequential tests, or multiple practical tests combined into one final result.
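As a rough illustration of the concept, one common way to combine component results is a weighted sum. The component names, scores and weights below are invented for the example and do not reflect any particular Inspera configuration:

```python
# Illustrative sketch: combining component results into one final result.
# All names, scores and weights here are made up for the example.
components = [
    {"name": "mandatory coursework", "score": 72, "weight": 0.4},
    {"name": "final exam", "score": 85, "weight": 0.6},
]

# Weighted sum of component scores (weights sum to 1.0)
final_score = sum(c["score"] * c["weight"] for c in components)
print(f"combined result: {final_score:.1f}")  # → combined result: 79.8
```

Sequential tests or multiple practical tests could be combined the same way, with one entry per component and weights chosen by the institution.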
Improved Learning Analytics
In 2019, R&D will continue its work to implement and extend the availability of high-quality and actionable analytics insights in the Inspera platform. Our aim is to make analytics more flexible, more useful, and more customisable, through improved support for classical test theory (CTT) and item response theory (IRT), and through better visualisation.
Analytics will become an important aspect of the assessment workflow, allowing users to evaluate the quality of questions and tests, and to analyse Learner performance at a more granular level.
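To make the CTT side of this concrete, the sketch below computes two standard item statistics, difficulty (the proportion of correct answers) and a point-biserial discrimination against the corrected total score, over a small invented response matrix. This is a minimal illustration of the statistics involved, not Inspera's analytics implementation:

```python
# Hedged sketch of classical test theory (CTT) item statistics.
# The response matrix is invented: rows = learners, columns = items,
# 1 = correct, 0 = incorrect.
from statistics import mean, pstdev

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

totals = [sum(row) for row in responses]
stats = []  # (difficulty, discrimination) per item

for item in range(len(responses[0])):
    scores = [row[item] for row in responses]
    difficulty = mean(scores)  # CTT p-value: proportion answering correctly
    # Point-biserial discrimination against the corrected total
    # (each learner's total minus their score on this item).
    rest = [t - s for t, s in zip(totals, scores)]
    sd_s, sd_r = pstdev(scores), pstdev(rest)
    if sd_s == 0 or sd_r == 0:
        discrimination = 0.0  # no variance, correlation undefined
    else:
        cov = mean(s * r for s, r in zip(scores, rest)) - mean(scores) * mean(rest)
        discrimination = cov / (sd_s * sd_r)
    stats.append((difficulty, discrimination))
    print(f"item {item}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```

A high difficulty value means an easy item; a positive discrimination means learners who did well overall also tended to answer that item correctly.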
Data-driven test construction
We are building tools to help our customers construct tests in smarter ways. 2019 will see the release of our linear-on-the-fly testing (LOFT) construction engine, which allows users to create multiple, psychometrically equivalent test forms prior to test-taking. In addition, improved support for question analytics and distractor analysis will help customers review questions prior to their use in assessments.
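As a rough sketch of the form-assembly idea behind LOFT (not Inspera's engine), the example below deals items from a small invented item bank into parallel forms so that each form's mean difficulty comes out approximately equal:

```python
# Illustrative sketch of parallel form assembly in the LOFT spirit.
# The item bank and difficulty values are invented for the example.
import random

random.seed(7)
bank = [{"id": i, "difficulty": round(random.uniform(0.2, 0.9), 2)}
        for i in range(12)]

def assemble_forms(bank, n_forms):
    """Snake-deal items ranked by difficulty across forms, so every
    form receives the same mix of easy and hard items."""
    ranked = sorted(bank, key=lambda item: item["difficulty"])
    forms = [[] for _ in range(n_forms)]
    for rank, item in enumerate(ranked):
        pos = rank % (2 * n_forms)
        idx = pos if pos < n_forms else 2 * n_forms - 1 - pos
        forms[idx].append(item)
    return forms

forms = assemble_forms(bank, n_forms=3)
for f in forms:
    avg = sum(i["difficulty"] for i in f) / len(f)
    print([i["id"] for i in f], f"mean difficulty {avg:.2f}")
```

A production engine would balance far more than difficulty (content coverage, item exposure, timing), but the snake deal shows why the resulting forms end up statistically similar: each form receives the same spread of difficulty ranks.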
Centralised permissions, distributed tests and custom user roles
The principle behind the Inspera APIs is to provide open REST APIs that use industry standards where applicable, enabling a bi-directional flow of data for all content types in Inspera Assessment. The APIs already have extensive support for test metadata, enrolment and contribution data, submissions, grading data, and user data. In 2019 we are continuing to expand the existing APIs as well as adding new ones.
Three new question types: