Test automatically gives 100% when started

vbeale Community Member Posts: 1
I have some tests which I built and have been reusing for over a year now. I republish them each time we need to test a new group of participants, just using a new name for the test that includes the date we are proctoring. I use the same file to create multiple versions, in case someone doesn't pass the first time and wants to retest.

The last time we proctored the tests, two of the test 'versions' had a strange problem. When a participant finished clicking through the instructions and pressed the button to begin the test (they are timed tests), they were taken straight to the results page, which said they had received 100%. I tested this myself and got the same result. This happened on different computers for everyone involved.

Has anyone else ever experienced this issue? It was the same test file published, just under different names such as "V1", "V2", etc., but it behaved differently.

Any ideas on how to make sure it doesn't happen again? (Other than rebuilding the files; it took me over 80 hours to build the various tests, and I do not have the time in my schedule right now to do that again.)

Using Lectora Inspire 17

Any help would be greatly appreciated!

Victoria Beale

Ohio LTAP Center / Ohio Department of Transportation


Comments

  • carlfink Community Member Posts: 968 ✭ Legend ✭
    Yes, I have seen this sporadically in both V16 and V17.

    I have no idea why this happens, but I have seen it.
  • zliquorman1276 Community Member Posts: 67 ☆ Roadie ☆
    I've had this on most of the tests I've created for SCORM. I'm not positive, but I think the assessment's score variable is defaulting to "100" instead of "0" in certain circumstances. You could test this by adding a running test-score display that appears even before the test begins, and watch how and where the value changes.

    If you want to get even more in-depth, you could add a little JavaScript to report the variable value to the console and maybe pinpoint the cause more exactly if the above isn't sufficient.

    I never bothered with that because of the way I set up assessments, which effectively resets the assessment/score any time a user starts, ends, or exits an assessment (they're required to take it in one sitting), but if it's an issue for you, it may be worth trying.
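
    As a minimal sketch of the console-logging idea above: this assumes Lectora's published output exposes title variables as global Var<Name> objects with a getValue() method, and the name "VarTest_1_Score" is a placeholder; verify both against your own published HTML before relying on it.

    ```javascript
    // Log a Lectora title variable's current value to the browser console.
    // Assumption: published pages expose variables as globals like
    // "VarTest_1_Score" with a getValue() method - check your published HTML.
    function logScoreVariable(varName) {
      // Fall back to globalThis so the sketch also runs outside a browser.
      var root = (typeof window !== "undefined") ? window : globalThis;
      var v = root[varName];
      var value = (v && typeof v.getValue === "function")
        ? v.getValue()
        : "(variable not found)";
      console.log("Score variable " + varName + " = " + value);
      return value;
    }
    ```

    You could call this from an "On Show" Run JavaScript action on the instructions page and again on the first question page; if the score already reads 100 before any question is answered, that points at the bad default the commenter suspects.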