In 2019, the Dienst Justitiële Inrichtingen (DJI), in collaboration with PBLQ, presented a case for our student teams to solve. The case asked how Artificial Intelligence could be introduced at DJI: where, for what purpose, how, and in collaboration with whom? The winning team, consisting of Jacco van Berkel, Siddharth Jethwani and Daniella Pereira Barrios, presented the following solution:

Using Artificial Intelligence in prison re-integration programmes

In the Netherlands, prisons have two essential roles: to protect society from detainees and to prevent detainees from committing further crimes through re-integration programs. These programs are crucial throughout a detainee's stay, as they set objectives and provide tools that detainees can use for a successful return to society. Several people are involved in the care, education, and evaluation of each inmate throughout this process. We believe the human interaction a detainee receives is essential to successful integration into society, since detainees are confronted with human interaction as soon as they leave prison. For this reason, if we wish to introduce Artificial Intelligence (AI) into the processes that take place inside prisons, it should not be used in a way that replaces this human contact, because that would reduce the humanity within prisons. A priority should be to maintain the view that prison staff currently hold of detainees, namely: ‘[they] do not see people as numbers, [they] see them as people’ (DJI). Thus, if AI is introduced into the prison system, it should operate out of sight of the prisoners. Furthermore, AI should have a merely advisory role when it comes to the fate of prisoners, in order to leave room for discretion among prison staff, especially the mentors and case managers (Lipsky 1980: 3). If AI had full control of decision-making within a prison, detainees might feel they were receiving impersonal treatment.

AI-laced cameras

At the detention center, one of the objectives is to minimize irregular behavior in detainees so that they are able to reintegrate into society upon release. Irregular behavior refers to conduct that deviates from the average behavior of the people detainees will reintegrate with. Cameras can be laced with artificial intelligence to detect such irregularities. While this monitoring has previously been handled solely by staffers at the detention center, artificial intelligence can help minimize the extent to which street-level bureaucrats’ discretionary power is influenced by psychological bias and subjectivity, enhancing the prison’s ability to predict detainees’ behavior.

The AI-laced cameras should collect data on each detainee and hold advisory rather than deciding power with respect to his or her conduct. Based on the data these cameras gather, recommendation-based enforcement can be applied, whereby the actions and decisions of mentors and others in contact with detainees are justified by the cameras’ advice.

These irregularities can be ‘good’ or ‘bad’. Good irregular behavior can be observed in, for example, the amount of time detainees spend reading or how well they communicate with one another; staffers might respond by allowing such detainees more time for activities. Bad irregular behavior, such as suicide attempts or violence, can be seen as a threat to progress towards re-integration, and staffers can be advised to pay closer attention to the detainees involved.
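To make this concrete, here is a minimal sketch of how such advisory anomaly detection might work. The features (hours reading, conversations, violent incidents), the thresholds, and the model choice are all illustrative assumptions, not DJI's actual system; the point is that the output is a recommendation for a staffer, never an automated decision.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-detainee daily features derived from camera footage:
# [hours_reading, conversations_held, violent_incidents]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[1.0, 5.0, 0.2], scale=[0.5, 2.0, 0.2], size=(200, 3))
baseline[:, 2] = np.clip(baseline[:, 2], 0, None)  # incident counts cannot be negative

# Two deliberately irregular observations: one "good", one "bad".
observations = np.vstack([baseline,
                          [4.5, 9.0, 0.0],   # reads far more than average
                          [0.1, 0.5, 3.0]])  # withdrawn and violent

model = IsolationForest(contamination=0.02, random_state=0)
flags = model.fit_predict(observations)      # -1 = irregular, 1 = regular

means = baseline.mean(axis=0)
for i in np.flatnonzero(flags == -1):
    hours_reading, conversations, incidents = observations[i]
    # Advisory output only: a staffer decides what, if anything, to do.
    if incidents > means[2]:
        print(f"detainee {i}: 'bad' irregularity -> advise closer attention")
    else:
        print(f"detainee {i}: 'good' irregularity -> advise extra activity time")
```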

Another important consideration when using AI anywhere in the detention center is the extent to which the data can be safeguarded against intrusion by external parties. Given the complex nature of algorithms, it could be relatively simple for an external party to manipulate them for personal gain; think of the case of Cambridge Analytica and Facebook (Cadwalladr and Graham-Harrison 2018). To safeguard the privacy of detainees, strict regulations should be put in place for those developing the algorithm, internally or externally, to ensure that the data is accessible only to the staffers and stakeholders the DJI deems relevant, and is never used maliciously or for personal gain.
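As a small illustration of the "accessible only to relevant staffers" rule, a role-based check could sit in front of every data query. The roles and record types below are invented for the sketch; real permissions would be set by DJI.

```python
# Role-based access to detainee data; roles and record types are hypothetical.
PERMISSIONS = {
    "mentor":       {"behavior_reports"},
    "case_manager": {"behavior_reports", "reintegration_plan"},
    "health_care":  {"medical_notes"},
}

def read_record(role: str, record_type: str, detainee_id: str) -> str:
    """Return a record only if the role is cleared for that record type."""
    if record_type not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not read '{record_type}'")
    return f"<{record_type} for detainee {detainee_id}>"  # placeholder for the real fetch

print(read_record("case_manager", "reintegration_plan", "D-042"))  # allowed
try:
    read_record("mentor", "medical_notes", "D-042")
except PermissionError as err:
    print("blocked:", err)                                          # denied
```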

AI literacy should also be promoted among staffers, so that they understand how the system produces its advice. If the advice is not followed, a justification should be provided; however, as the presentation made clear, people tend to take the path of least effort, and many will simply follow the advice so they do not have to justify deviating from it. For this reason, the justification process should be incentivized for staffers as well.
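One way to make such justification cheap to record, and auditable afterwards, is a small override log that refuses to store a deviation without a reason. The structure below is a hypothetical sketch, not an existing DJI tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AdviceDecision:
    """Audit record: did the staffer follow the AI advice, and if not, why?"""
    staffer: str
    advice: str
    followed: bool
    justification: str = ""
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Deviating from the advice is allowed, but never silently.
        if not self.followed and not self.justification.strip():
            raise ValueError("overriding AI advice requires a justification")

log: list[AdviceDecision] = []
log.append(AdviceDecision("mentor_01", "increase supervision", followed=True))
log.append(AdviceDecision("mentor_02", "increase supervision", followed=False,
                          justification="incident was a one-off, confirmed in person"))
```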

Enhancing network communication between chain partners 

During DJI’s presentation, the importance of communication between chain partners was stressed. As the different partners each possess different pieces of information about detainees, sharing these data can enhance DJI’s ability to predict certain behavior. Parole officers, local authorities, and health care providers, for example, all have different experiences with a detainee, but together their information provides a fuller picture of who the detainee actually is. It is therefore vital that chain partners share information continuously throughout the detention and reintegration period, creating a so-called ‘information network’. In such networks, the continuous sharing of information results in high commitment and eventually interdependence, as knowledge-sharing becomes an integral part of the parties’ communication (Powell 1990: 300).

We believe that AI can play a crucial role in optimizing this transfer of knowledge between network partners. We suggest setting up a digital ID program that connects the partners’ information. These data can then be used to predict risk behavior through AI algorithms: think of predicting recidivism, violent conduct, or suicide attempts. The predictions should be used to inform the staff at the detention center and the actors affiliated with the detainee’s reintegration program.
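A rough sketch of what this could look like: records from the chain partners are joined on a shared pseudonymous digital ID and scored by a simple model. The partner fields, the IDs, and the logistic-regression classifier are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical partial records from three chain partners, keyed by a pseudonymous ID.
parole       = {"id_7f3a": {"prior_violations": 2}, "id_9c1d": {"prior_violations": 0}}
health_care  = {"id_7f3a": {"treatment_compliant": 0}, "id_9c1d": {"treatment_compliant": 1}}
municipality = {"id_7f3a": {"housing_arranged": 0}, "id_9c1d": {"housing_arranged": 1}}

def merge(digital_id: str) -> list[float]:
    """Combine the partners' partial views into one feature vector."""
    return [parole[digital_id]["prior_violations"],
            health_care[digital_id]["treatment_compliant"],
            municipality[digital_id]["housing_arranged"]]

# Toy historical cases with known outcomes (1 = recidivated) to fit the model.
X_train = np.array([[3, 0, 0], [0, 1, 1], [2, 0, 1], [1, 1, 1], [4, 0, 0], [0, 1, 0]])
y_train = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X_train, y_train)

for digital_id in ("id_7f3a", "id_9c1d"):
    risk = model.predict_proba([merge(digital_id)])[0, 1]
    # Advisory output only: staff and reintegration partners interpret the score.
    print(f"{digital_id}: estimated recidivism risk {risk:.0%}")
```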

The danger of such an online ID program is that sensitive information about detainees could, in the worst case, be leaked or hacked. It is therefore essential that the program be optimally secured. A comparable system already exists within health care: medical dossiers. These documents are held strictly private and may be reviewed only by patients and their caretakers (Rijksoverheid 2019). Since such a system already exists and is accepted by the public, a similar system for prisons should not be a problem.

Bibliography

Cadwalladr, C. and Graham-Harrison, E. (2018). “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach”, The Guardian, 17 March 2018.

Lipsky, M. (1980). Street-Level Bureaucracy: Dilemmas of the Individual in Public Services. New York: Russell Sage Foundation.

Powell, W. (1990). “Neither Market nor Hierarchy: Network Forms of Organization”, Research in Organizational Behavior, 12: 295-336.

Rijksoverheid (2019). “Wie mag mijn dossier zien?”, https://www.rijksoverheid.nl/onderwerpen/rechten-van-patient-en-privacy/uw-medisch-dossier/wie-mag-mijn-medisch-dossier-inzien. Consulted on 29 November 2019.
