
UserFeedbackAPI



Description

To provide instrumentation of services from a user perspective, this API provides the foundation for collecting anonymous feedback data. The API is designed to be used by InclusionToolbox, a client-side JavaScript drop-in library that drives the UserFeedbackApp. The UserFeedbackApp is the first such tool and is responsible for collecting the rating data and sending it to the API.


About

In the spring of 2023, the extension of the Covid project determined that, to combat the challenge of digital inclusion and to provide a more citizen-centric approach to the development of digital services, 14 directives were issued. One of these directives was data-driven development. Since resources like time and funding are limited, the perspective of measure first, then improve was adopted. To provide a tool for this, the UserFeedbackAPI was introduced to collect anonymous rating data from users of digital services.

Note: From the get-go, the UserFeedbackAPI was planned to be hosted centrally by a government agency, which would also enable some synergy with the directive of central governance. However, since the UserFeedbackAPI has not yet been adopted by a government agency, the hosting model is open. Had the UserFeedbackAPI been centrally hosted, the general idea was that it would be a shared service, only requiring the agencies to drop in the JavaScript library and configure it to their needs. This is still the case, but now the agencies either need to host the UserFeedbackAPI themselves or use a third-party hosting service.



API Documentation

The API is documented using OpenAPI 3 and the documentation can be generated by running the server and then accessing the /swagger endpoint.

To review the API documentation without running the server, copy the contents of the file below, visit the Swagger UI editor website and paste it into the editor's left-hand pane.


How the UserFeedbackAPI works

The general idea (simplified) for the interaction with this initial version of the UserFeedbackAPI is as follows (a sketch of the calls is shown after the list):

  1. An organisation registers by using the /organisation/register endpoint (protected endpoint).
    From the response, the organisation will receive an apiKey for use in subsequent requests.

    Note: Depending on hosting, the registration step could be facilitated by a backoffice or a website to simplify the distribution of API keys.

  2. The JavaScript drop-in module (when configured, initialized and loaded) calls the /impression/create endpoint (public endpoint).

    Note: This will send an impression and create a context (if none exists) from where the feedback component was loaded. The response from this request contains an id (impression) which is used in rating requests.

  3. When the feedback module receives a rating choice from the user, the JavaScript module calls the /rating/create endpoint (public endpoint).

    Note: This will send a rating and contribute to the rating statistics for that specific context (part of a user-journey).
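
As a rough illustration of the flow above, the sketch below performs the three calls with java.net.http.HttpClient. The base URL, the JSON field names and the payloads are assumptions made for this example only; the actual request and response schemas are defined in the OpenAPI documentation.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FeedbackFlowSketch {

    // Assumed base URL; replace with the host where the UserFeedbackAPI is deployed.
    private static final String BASE_URL = "https://userfeedback.example.org";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1. Register an organisation (protected endpoint) and read the apiKey from the response.
        send(client, "/organisation/register",
                "{\"name\": \"Example Agency\"}",
                "Bearer <access-token>");

        // 2. Create an impression (public endpoint); the response contains the impression id.
        send(client, "/impression/create",
                "{\"apiKey\": \"<api-key>\", \"context\": \"/some-page\"}",
                null);

        // 3. Create a rating tied to the impression (public endpoint).
        send(client, "/rating/create",
                "{\"impressionId\": \"<impression-id>\", \"rating\": 4}",
                null);
    }

    // Helper that posts a JSON body and prints the response status and body.
    private static void send(HttpClient client, String path, String json, String auth)
            throws Exception {
        HttpRequest.Builder builder = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json));
        if (auth != null) {
            builder.header("Authorization", auth);
        }
        HttpResponse<String> response = client.send(builder.build(), HttpResponse.BodyHandlers.ofString());
        System.out.println(path + " -> " + response.statusCode() + " " + response.body());
    }
}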

Getting started

Building

gradle build

Running

gradle bootRun --args='--spring.profiles.active=dev'

Configuration

As with all services, the UserFeedbackAPI needs to be configured to work in a specific environment. The configuration for the Spring Boot application server is done using the application.yml file and the application-<profile>.yml files.

As for configuring the stand-alone parts, please review the k8 deployment scripts for an idea of how to configure them in production. There is a keycloak-compose.yml file that can be used in conjunction with Docker to start a Keycloak server locally. There is also a script for starting a local Redis instance: start_redis_server.sh.

Reference architecture

This project is a Spring Boot application and has been developed using the Java 17 Corretto SDK.

When the project was test run, it was deployed on AWS EKS and supported by an AWS RDS Aurora PostgreSQL 15 cluster provisioned like this.

For more details pertaining to provisioning, please review the deployment scripts in the k8 folder. There you will find scripts that include the use of certificates; the certificate used in the reference architecture was a public certificate issued by the AWS Certificate service.

The Redis implementation is written as a write-aside cache with a Least-Recently-Used (LRU) eviction policy.
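
As a minimal sketch of the pattern (illustrative only, not the project's actual classes), a write-aside lookup with Spring Data Redis could look roughly like this; the repository interface, key format and time-to-live are assumptions:

import java.time.Duration;
import org.springframework.data.redis.core.StringRedisTemplate;

// Hypothetical illustration of a write-aside (cache-aside) lookup; the class,
// repository and key format are assumptions, not the project's actual code.
public class ContextStatisticsCache {

    private final StringRedisTemplate redis;
    private final ContextStatisticsRepository repository; // assumed database repository

    public ContextStatisticsCache(StringRedisTemplate redis, ContextStatisticsRepository repository) {
        this.redis = redis;
        this.repository = repository;
    }

    public String averageFor(String contextId) {
        String key = "rating:average:" + contextId;

        // Read path: try the cache first.
        String cached = redis.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }

        // Cache miss: read from the database and write the value aside into Redis.
        // The LRU eviction itself is configured on the Redis server
        // (e.g. maxmemory-policy allkeys-lru), not in the application code.
        String fromDb = repository.findAverage(contextId);
        redis.opsForValue().set(key, fromDb, Duration.ofSeconds(60));
        return fromDb;
    }

    // Minimal interface standing in for the real persistence layer.
    public interface ContextStatisticsRepository {
        String findAverage(String contextId);
    }
}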


Stand-alone parts

PostgreSQL instance - stand-alone, verified to work with versions 12 and 15.

Redis instance - stand-alone, verified to work with version 7.2.1.

Keycloak instance - stand-alone, verified to work with version 22.0.5.

See diagram: architecture diagram, for an overview of the reference architecture. The diagram is created using draw.io.


Testing

For testing, the project has a Gradle task named testEndToEnd.

tasks.register('testEndToEnd', Test) {
    include '**/**EndToEndTestSuite*'
    useJUnitPlatform()
}

The end-to-end tests require the necessary parts of the infrastructure to be running locally. The Redis instance and the PostgreSQL instance are required for the Spring profile local. The local profile will not use the Keycloak instance.

gradle testEndToEnd -Pargs='--spring.profiles.active=local'


Authentication

The UserFeedbackAPI uses OAuth 2.0 for authentication and is currently configured to use a stand-alone Keycloak server as the provider in production, securing all sensitive endpoints of the API; see the security config.

Please see the Keycloak documentation for details on how to configure a stand-alone Keycloak server.

Note: The current setup assumes that the Keycloak server uses a database to store the users and realm configurations.

The API can be run with authentication (for sensitive endpoints) by using the dev or prod Spring profiles. The dev profile assumes that the Keycloak server uses a separate realm, intended for a Keycloak server that runs either in a development cluster or on localhost. Correspondingly, the prod profile assumes that the Keycloak server uses another realm. The reason for this is that the Keycloak server's realm configuration needs to be able to route the authentication requests, upon completion, to the correct client application.
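
As a hedged sketch only (the security config referenced above is the authoritative version), a Spring Security resource-server setup in the style of Spring Boot 3 / Spring Security 6 could look roughly like the following; the endpoint matchers and class name are assumptions:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Illustrative only: the endpoint matchers and class name are assumptions.
@Configuration
public class SecurityConfigSketch {

    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            // Public endpoints used by the JavaScript drop-in module.
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/impression/create", "/rating/create").permitAll()
                // Sensitive endpoints, e.g. /organisation/register, require a valid token.
                .anyRequest().authenticated())
            // Validate JWTs issued by the Keycloak realm configured for the active profile.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}

The realm for each profile would then typically be pointed out via the standard spring.security.oauth2.resourceserver.jwt.issuer-uri property in the corresponding application-<profile>.yml file.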


Statistical model

Concept: Sliding window

The rating.expirationTimeInSeconds configuration property determines the time window for which the rating average is calculated. The time window is a sliding window based on the current time and the current time minus the expiration time, creating an interval in which the average is calculated. All ratings recorded for each context within this interval contribute to the average.

The impression.expirationTimeInSeconds configuration property determines the time window during which the user's impression ~ rating pair is still valid and no more ratings are allowed for that user and context. Issuing more impressions during this time will return the already recorded impression and subsequent rating responses, so that no individual user's contribution to the rating average skews the statistics.

For how to configure the behaviour please see the application.yml file and the application-<profile>.yml files.

Note: The idea behind collecting an impression before a rating is primarily to enable the analysis to include the usefulness of collecting feedback at a specific context. Say, hypothetically, that a single context is presented (sends impressions) but rarely receives ratings from users; this lack of user engagement might indicate that this context (location in a user journey) is less effective.


# Rating
rating.expirationTimeInSeconds=10

# Impression
impression.expirationTimeInSeconds=10


Note: The rating response returns the average rating for the context and the number of ratings that contributed to the average; in this way the observer can determine how actionable a rating average is.
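
To make the window concrete, here is a minimal sketch (illustrative only; the Rating record and the in-memory list of ratings are assumptions) of how the average and contributing count for a context could be computed over the interval described above:

import java.time.Instant;
import java.util.IntSummaryStatistics;
import java.util.List;

public class SlidingWindowAverage {

    // Illustrative only: the Rating record and in-memory storage are assumptions.
    public record Rating(String contextId, int value, Instant recordedAt) {}

    public record ContextAverage(double average, long count) {}

    // Computes the average and contributing count for one context over the
    // interval [now - expirationTimeInSeconds, now], i.e. the sliding window.
    public static ContextAverage forContext(List<Rating> ratings, String contextId,
                                            long expirationTimeInSeconds, Instant now) {
        Instant windowStart = now.minusSeconds(expirationTimeInSeconds);

        IntSummaryStatistics stats = ratings.stream()
                .filter(r -> r.contextId().equals(contextId))
                .filter(r -> !r.recordedAt().isBefore(windowStart)) // within the window
                .mapToInt(Rating::value)
                .summaryStatistics();

        return new ContextAverage(stats.getAverage(), stats.getCount());
    }
}

With rating.expirationTimeInSeconds=10, as in the configuration snippet above, only ratings recorded during the last ten seconds would contribute to the returned average and count.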


The future

Currently, the platform has the capability to collect rating data and not much more. The vision, though, is for the service to be able to do more:

  1. Archiving / Data-minimization - there are a couple of strategies here depending on the use-case for the collected data.

    A. The data is archived (or aggregated) in a separate database and the data is only used for e.g. long-term statistical purposes.

    B. The data is removed after a certain period of time (e.g. 6 months) and the data is only used for evaluating short-term changes to the service.

    C. A combination of A & B.

    Note: There are naturally alternatives to these proposals depending on data-retention policies and other proposed use-cases. If the data-model is expanded it may be suitable to choose different strategies for individual data-sets.

  2. Data analysis - the data can be analysed by presenting the data in a consumable way.

    A. The data is presented in a dashboard-like view with searching and sorting depending on organisation, metadata and contextual data.

    B. The data is ingested using an external service like e.g. PowerBI or an equivalent service.

    Note: Potential solutions differ greatly and depend on whether the platform receives e.g. its own backoffice or whether the service's sole purpose is to collect data. Ideas and proposals for how to visualize the statistics in an effective way are welcome.

  3. Data export / Sharing - the data can be exported or shared.

    The initial idea was for the data to be shared with the service owner (or organisation) that is the source of the data, since the point of collecting the user feedback is for the organisation's developers to act on trends and effectively improve the service for the users. This idea stems from the assumption that the service is centrally hosted; if this is not the case, the organisation already has access to the data and this functionality may be redundant.

    Note: Depending on whether data is pushed by an internal process (or job) or fetched (e.g. long-polling), this mechanism could possibly be used for archiving purposes (see point 1) or for ingestion into external services (see point 2).

  4. Expansion - this project could be expanded into a launch pad to support further tooling, e.g. saving a user's preferred settings for the Support modal. By storing the user-configured modal settings coupled with a code (e.g. XYZ-123), the settings could be shared anonymously between the user's devices without the need for account / login management.


Copyright © 2021-2023, Myndigheten för digital förvaltning - Swedish Agency for Digital Government (DIGG). Licensed under the MIT license.
