AI is already in your organisation: now define what it is allowed to do

by Digital Hub Cyprus

In a previous article, I argued that the most overlooked AI risk for Cyprus firms is not that machines will “go rogue.”

The risk is simpler.

AI executes the authority that already exists in your systems, and much of that authority was never designed with machines in mind.

So the next question is practical: if AI is already participating in your daily work, what is it actually allowed to see and do?

Consider a common scenario. A Nicosia-based professional services firm connects an AI assistant to its document platform to speed up drafting and summarisation. Employees love it and adopt it quickly. Within a week, it becomes the default way to search, extract clauses, and draft client responses. Then a client asks a due diligence question: “Exactly what does your AI have access to?” The firm cannot answer confidently, because the AI runs under a shared account that sees far more than the assistant’s job requires.

Nothing has malfunctioned. The system is doing precisely what it was authorised to do. This is machine authority.

For years, organisations built access around people. Employees join, receive credentials, and are granted permissions that match their role. When they leave, access is revoked. The model is imperfect, but the principle is clear: authority flows through identifiable humans.

AI breaks that assumption.

It is a non-human actor working inside workflows with credentials, permissions, and reach. If you treat it like a feature, it will quietly inherit whatever authority is available. If you treat it like an actor, you can govern it.

A workable approach for most Cyprus firms is not complicated. It requires three disciplined changes.

First, give AI its own identity.

Do not run AI systems under senior employees' accounts or broad service accounts created for old integration projects. Create purpose-built machine identities for each AI system, tied to a defined business function. That gives you accountability, because you can see exactly what the system accessed, and control, because its access can be suspended without affecting human users.
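As a concrete illustration, here is a minimal Python sketch of what a purpose-built machine identity might record. It assumes an in-house registry; the names (MachineIdentity, allowed_scopes, the example identity) are illustrative, not the API of any particular IAM product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MachineIdentity:
    """One identity per AI system, never shared with human accounts."""
    identity_id: str           # e.g. "ai-client-query-summariser"
    business_function: str     # the single job this system is approved to do
    owner: str                 # the human accountable for this identity
    allowed_scopes: set[str] = field(default_factory=set)
    suspended: bool = False
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def suspend(self) -> None:
        # Cutting off the AI system does not touch any human user's access.
        self.suspended = True

# Each AI system gets its own identity, tied to a defined business function.
summariser = MachineIdentity(
    identity_id="ai-client-query-summariser",
    business_function="Summarise inbound client queries",
    owner="compliance@example.com",
    allowed_scopes={"inbox/inbound-queries"},
)
```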

Second, restrict permissions to the minimum necessary.

Most AI systems do not need enterprise-wide visibility. A customer-service summariser does not need finance folders. A legal drafting assistant does not need marketing files. Narrowing access reduces the blast radius of mistakes and misconfiguration.

In practice, the rule is simple: AI should only access explicitly approved datasets, folders, or matters. If the system’s role is “active client matters,” it should not be able to read closed files. If its role is “summarise inbound queries,” it should not be able to search the entire document library.
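Continuing the hypothetical sketch above, a deny-by-default check makes that rule mechanical: a resource is readable only if it sits under an explicitly approved scope. In practice this enforcement belongs in the document platform's own permission model; the Python below only illustrates the logic.

```python
def is_access_allowed(identity: MachineIdentity, resource_path: str) -> bool:
    """Deny by default: the AI may only read explicitly approved scopes."""
    if identity.suspended:
        return False
    # A resource is readable only if it sits under an approved scope.
    return any(
        resource_path == scope or resource_path.startswith(scope + "/")
        for scope in identity.allowed_scopes
    )

# The summariser can read inbound queries...
assert is_access_allowed(summariser, "inbox/inbound-queries/2025-03-04.msg")
# ...but not closed matters or the wider document library.
assert not is_access_allowed(summariser, "matters/closed/acme-2021")
```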

Third, make authority visible and revocable.

AI operates at machine speed. That means you need two capabilities many firms still treat as optional: audit trails and a kill switch.

If you cannot reconstruct what an AI system accessed and why, you cannot credibly explain your risk position to a board, an auditor, an insurer, or a major client. And if you cannot suspend access immediately when something looks wrong, you do not control authority. You merely hope it behaves.
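Building on the same hypothetical sketch, both capabilities are small in code terms: an append-only record of every access decision, and a suspension flag that takes effect on the next request. Real deployments would rely on the platform's audit and IAM tooling rather than application code, but the shape is the same.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_access_audit.jsonl"

def audited_access(identity: MachineIdentity, resource_path: str, reason: str) -> bool:
    """Record every access decision so it can be reconstructed later."""
    allowed = is_access_allowed(identity, resource_path)
    entry = {
        "when": datetime.now(timezone.utc).isoformat(),
        "identity": identity.identity_id,
        "resource": resource_path,
        "reason": reason,      # the task the AI was performing when it asked
        "allowed": allowed,    # denials are logged too, not just grants
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return allowed

# The kill switch: one call, effective on the very next request.
summariser.suspend()
assert not audited_access(summariser, "inbox/inbound-queries/q1.msg", "draft reply")
```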

This is not only good practice. It aligns with where regulation is heading.

Cyprus has now transposed NIS2 into national law, placing explicit expectations on management bodies to approve cybersecurity risk-management measures, oversee implementation, and face liability for failures. DORA has been applicable since January 2025 for financial entities in scope, raising the bar on governance, incident management, resilience testing, and third-party technology risk. And the EU AI Act is moving parts of AI governance into enforceable obligations, including transparency and traceability for certain systems.

Local supervision is pointing in the same direction. CySEC has highlighted weaknesses in incident reporting and stressed the need for a well-documented ICT risk-management framework, independent oversight, and regular audits.

For Cyprus, the implications extend beyond regulation. They are commercial.

The island’s economy depends on cross-border trust: investment services, payments, professional services, fintech, and digital-asset businesses serving international clients. In these markets, due diligence is increasingly specific. Clients want to know not only whether you use AI, but how you control it.

That is where the real competitive divide forms. Not between firms that adopt AI and those that do not, but between firms that can explain, in plain language, what their AI is allowed to see and do, and firms that cannot.

AI is already operating inside many organisations.

The only remaining question is whether it arrived as an ungoverned guest or as a controlled participant.

The firms that define machine authority now will not only reduce risk. They will move faster, answer harder questions with confidence, and turn AI governance into a trust advantage.

Because in the age of machine actors, the most important governance question is no longer what your employees are allowed to do. It is what authority your software is already executing.

*Petros Nearchou is a director at a US-based Enterprise Cybersecurity & IAM firm
