Legal issues hamper application of artificial intelligence

11 November 2019 | Algorithms should make production processes more efficient, but they also raise new legal issues. YP Your Partner explores the legal pitfalls of AI.

Data flow essential to bring AI to the level of automated processes

Manufacturing companies, R&D departments and software and sensor technology suppliers are all exploring the possibilities of artificial intelligence (AI). Algorithms will lead to greater efficiency in production processes, better quality, lower costs and greater safety. However, algorithms also raise a number of new legal issues that require extra vigilance when closing a deal.

YP Your Partner in Drachten is an example of a company whose main activity is collecting, combining and processing sensor data and other data. The company manages data flows in its own software platform C.A.R.S, allowing remote monitoring, operation and automation of machines, equipment and processes. Currently YP Your Partner is working on the transition from C.A.R.S 7 to C.A.R.S 8, a version that includes AI. "In the current situation, an alert is generated when a threshold value is exceeded. This has to become smarter: you do not only want an alert when the pressure in a boiler is too high, you also want context details, such as the installation date, serial number and type number. With those details you can plan maintenance more efficiently," says director Theun Prins.
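The context-enriched alert Prins describes can be sketched roughly as follows. This is a minimal illustration, not C.A.R.S code: the asset registry, field names and `check_pressure` function are all assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical asset registry; in practice this context would come from an
# installed-base database, not a hard-coded dictionary.
ASSET_CONTEXT = {
    "boiler-42": {
        "installation_date": "2015-06-01",
        "serial_number": "SN-90210",
        "type_number": "B-200",
    },
}

@dataclass
class Alert:
    asset_id: str
    reading: float
    threshold: float
    context: dict  # installation date, serial/type number, etc.

def check_pressure(asset_id: str, reading: float, threshold: float) -> Optional[Alert]:
    """Return an alert only when the threshold is exceeded,
    enriched with whatever context is known about the asset."""
    if reading <= threshold:
        return None
    return Alert(asset_id, reading, threshold, ASSET_CONTEXT.get(asset_id, {}))

# A reading above the threshold yields an alert carrying the asset's context.
alert = check_pressure("boiler-42", reading=9.5, threshold=8.0)
```

The point of the sketch is the second return value: instead of a bare "pressure too high" signal, the maintenance planner receives the serial and type number needed to act on it.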

Conversion of analog data
According to Prins, YP Your Partner has been measuring data flows successfully for thirty years and now has the tools to determine context. The biggest challenge is filling the system. "Right now a lot of information about installations is only available in PDF and cannot be used digitally. If you want to use analog data, it must be converted into data suitable for algorithms." Prins points to BIM (Building Information Model), a model used in the building industry, in which all parties involved in a construction project enter their data on installations, materials and so on. "We also need such a system in industry. So far, we only use a small part of our knowledge to compare installations and have them respond to each other. This involves compact algorithms, not complex systems. There is not yet a business case for the installed base, which is a precondition for meaningful application of AI."

Decision support
The added value of AI is clear: faster and more efficient processes at lower cost, with predictive maintenance as the magic word. "Many companies are still in the phase of static, manual reports instead of interactive ones. We want to move towards fully predictive, automatic systems. These systems require algorithms that generate decision-supporting information based on concrete queries to systems," says Prins. "For instance, if a pump fails three times in a row, the software advises replacement. Instead of just signaling the failure, the software gives advice and automatically orders a new pump as soon as the cost of the failures exceeds the cost price of the pump. We humans still have to develop a feel for this."
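The pump rule Prins describes can be written down as a simple decision function. The sketch below is hypothetical: the function name, return labels and thresholds are illustrative assumptions, not YP's actual logic.

```python
def maintenance_advice(failure_costs: list, pump_price: float) -> str:
    """Decision-support rule for a recurring pump failure.

    failure_costs: cost of each recorded failure of this pump, in order.
    pump_price:    cost price of a replacement pump.

    Rules (as described in the article):
    - if the cumulative failure cost exceeds the pump price, order a
      replacement automatically;
    - otherwise, after three failures in a row, advise replacement;
    - otherwise, keep monitoring.
    """
    if sum(failure_costs) > pump_price:
        return "order"          # failures now cost more than a new pump
    if len(failure_costs) >= 3:
        return "advise_replacement"
    return "monitor"
```

Note the ordering of the checks: the automatic order takes precedence over the mere advice, so the system escalates as soon as the economics tip, regardless of the failure count.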

Chain of parties
But there is more. If you rely on algorithms, you have to consider the legal consequences, says Prins. "Suppose you have implemented such an algorithm and it worked well ten times. Ten times the system orders a new pump, but the eleventh time the system issues a large order for several pumps. Who is responsible for this error: the company that developed the algorithm or the company using it? How do you deal with such a situation?" The chain of parties involved, from the owner of the machine and the supplier of the intelligent software to the developer of the algorithm and the suppliers of hardware and sensors, can complicate the application of AI, confirms André Kamps, independent ICT and privacy lawyer at Kamps Juridisch Advies. "When YP has devised an algorithm and integrated it into an application, customers may sometimes use it in entirely different processes. It is therefore important to define the application of the algorithm very clearly in the user agreement. For other applications, separate agreements should be made to prevent unpleasant surprises," says Kamps.

Conflicts
Besides disagreement about liability, a lack of written agreements may also lead to conflicts over matters such as IP and revenues. Prins illustrates this with a practical example. "Suppose we developed an algorithm for a machine builder in a certain domain and another partner discovers that it can also be used in a different domain. The question then is who owns the IP and how you divide the revenues. So far, we have always managed to find a solution. If a user earns a lot through such a clever deal, we demand a certain share of the profit upfront, but we have not yet found a proper legal solution for this. We adapt our contracts based on new insights." YP has defined three types of partnerships for the usage and resale rights of its software. "This lays down what a customer can and may do with our software. We now have to make the same arrangements for AI."

Appropriate arrangements
The need for appropriate arrangements will be just as great for algorithms, says Kamps. "How do you deal with confidential data? If a party wants to terminate the agreement, can it still use the data? What about the rights if a party chooses to continue with a competitor? Continuity is also an important aspect: what if a party fails to meet its obligations or goes bankrupt? Is the system easy to replace? Has an exit strategy been agreed? These are issues you should investigate in advance and, where possible, include in your revenue model." Other legal aspects that require attention include the processing of personal data, for instance operators' productivity measurements, and pricing within the chain.

Growing unpredictability

Now that companies' software systems are frequently linked, the use of algorithms is becoming less predictable. As algorithms become more complex, smarter and self-learning, their output becomes ever harder to predict. "This makes it harder to make agreements on liability. The Dutch Civil Code attributes loss to the owner of the product. Whether this also applies to self-learning algorithms remains to be seen," states Kamps. "A company that chooses to use algorithms has to consider liability and take out insurance for it. Insurance companies have little experience with algorithms that were interpreted or used incorrectly and will probably be reluctant to issue a policy. Perhaps in future, besides taking out an additional cyber policy, you will have an information expert make an assessment before acceptance into an insurance policy."

Connecting sources
In the run-up to more intensive use of AI, Prins wants to take a number of steps with YP. "To begin with, we want to connect all information sources, then improve virtualization of physical reality, and finally we aim for extensive integration of humans, which is made easier by mobile phones and interconnectivity," says Prins. "People have to enter data and feedback on causes in a disciplined way, as this is essential for root cause analysis. This data flow is also necessary to raise AI to the level that allows automated processes. Full automation, from data to action."


The original article in Dutch was published in the October 2019 edition of Link Magazine.

