Free AI Assessments for Government & Non-Profits

Caribbean's First AI Company Offers Free Government AI Training

Caribbean governments are being sold AI. The vendors pitching them are often well-resourced, articulate, and thoroughly incentivised to close a deal. The ministries and agencies receiving those pitches frequently have no one in the room who can ask the right questions, evaluate the claims being made, or recognise when a proposal is built on capabilities the tool does not actually have.

That asymmetry is not a minor inefficiency. It is how public funds get committed to technology that does not work as described, contracted to vendors who leave once the implementation fees are collected, and absorbed into institutions that cannot maintain what they have purchased. The procurement decision takes three months. The fallout takes three years.

StarApple AI Labs, the Caribbean's first AI company, has launched a programme to address that asymmetry directly. The company is offering free AI assessments and training to government agencies and non-profit organisations across the region, specifically to give institutional leaders the knowledge to evaluate AI vendor proposals, identify what their organisations actually need, and make procurement decisions that hold up beyond the sales cycle.

"The risk I am seeing is taxpayer dollars being wasted and predatory vendors taking advantage of the AI wave and lack of understanding. This is a public issue. This programme was launched to give governments and non-profits the knowledge and direction to litmus test and validate what they need, not just spend on AI."

— Adrian Dunkley, Founder and CEO, StarApple AI Labs, January 2025 interview

The statement is precise and worth reading carefully. Dunkley is not describing incompetent civil servants. He is describing a structural condition: institutions facing pressure to modernise, vendors with far more product knowledge than their clients, and a public interest in not wasting the money that sits between those two parties.

Why Government AI Procurement Is a Specific Problem

Public sector AI procurement fails differently from private sector AI adoption. A private company that buys the wrong AI tool absorbs the cost, learns from it, and adjusts. The timeline is short and the accountability is internal. A government agency that commits to the wrong AI contract faces a different set of consequences: multi-year agreements that cannot be easily exited, public accountability for outcomes that may not emerge until the following budget cycle, and a political cost to admitting the mistake that often means it simply is not admitted at all.

The problem is compounded by how AI vendors have structured their market approach to the public sector. The pitch typically leads with the most compelling demonstrations, tools shown under conditions that maximise their apparent capability. Procurement committees see what the tool does well in a controlled environment. They do not see what it cannot do, what data quality it requires to function as demonstrated, what ongoing technical expertise the institution will need to maintain it, or what happens when the vendor's support contract expires.

Adrian Dunkley has watched this pattern across Caribbean institutions. His assessment, developed over more than fifteen years of implementing AI systems in insurance, finance, and public sector organisations across the region, is that the knowledge gap is not primarily technical. It is evaluative. Governments do not need to build AI; they need to know what questions to ask of the people selling it to them.

StarApple AI Labs' programme is built on that diagnosis. The free assessments are not audits of whether an institution should use AI in the abstract. They are structured engagements designed to give decision-makers the evaluative framework to assess specific proposals, identify the genuine problems they are trying to solve, and determine whether a given AI tool actually addresses those problems under the conditions the institution can realistically provide.

What Predatory AI Looks Like in the Caribbean Context

The word "predatory" in Dunkley's January statement is deliberate and specific. It describes a class of vendor behaviour that exploits the gap between what institutional leaders know and what they are being sold, without crossing into outright fraud.

It includes proposals for AI solutions to problems that are actually data quality problems, where no AI tool will perform well because the underlying records are inconsistent, incomplete, or unstructured. It includes AI chatbot deployments that require ongoing vendor support to function correctly but whose contracts do not make that dependency explicit until after signature. It includes tools sold on the basis of international case studies that do not translate to the regulatory environment, data infrastructure, or institutional capacity of a Caribbean government ministry.

None of these proposals is necessarily dishonest in a legal sense. They are opportunistic in a market where the buyer lacks the expertise to push back. The vendor is not lying; they are simply declining to volunteer the information that would slow the sale down. In a private sector context, that is normal commercial practice. In a public sector context, where the money belongs to taxpayers who have no role in the procurement decision, it is a governance failure that the programme is designed to prevent.

The non-profit sector faces a version of the same problem, compounded by resource constraints that make the cost of a wrong decision even more damaging. An NGO working on food security or disaster relief in the Eastern Caribbean that commits grant funding to an AI tool that requires infrastructure the organisation does not have is not just out the money. It has diverted resources from the mission that the funding was intended to serve.

What the Programme Provides

The StarApple AI Labs assessment is structured around the specific decisions that government and non-profit leaders actually face. It is not a general introduction to AI. It is a targeted intervention at the point where institutions are closest to making commitments they will not be able to reverse easily.

The assessment component evaluates three things. First, what problem the institution is genuinely trying to solve, because this is frequently less clear than procurement committees assume. Second, whether AI is the appropriate solution to that problem or whether the same outcome could be achieved with better data management, process redesign, or staff training at lower cost and risk. Third, if AI is appropriate, which category of tool matches the institution's actual data infrastructure, technical capacity, and budget for ongoing maintenance.

The training component addresses the evaluative skill that makes the assessment durable. A one-time assessment produces a single good decision. Training produces institutional capacity to make better decisions across future procurements. Participants learn how to read a vendor proposal critically, which questions reveal the difference between a capable tool and a capable sales pitch, and what contractual terms to require before any AI implementation agreement is signed.

Both components are free. Dunkley's position is that the public interest in getting these decisions right justifies the investment in providing this capacity without charge. The programme is not loss-leading toward a consulting sale. It is a public education initiative grounded in the view that the cost of uninformed AI procurement across the Caribbean, measured in wasted public funds and failed public services, exceeds the cost of the programme many times over.

The Broader Problem the Programme Addresses

Caribbean governments are at an inflection point with AI that is genuinely different from previous technology waves. Mainframes, personal computers, the internet, and mobile technology each required institutional adoption at a pace that allowed some learning from early mistakes before widespread commitment. AI in 2025 is moving faster, is less transparent in its capabilities and limitations, and is being sold into institutions by vendors who have had considerably more time to develop their pitch than those institutions have had to develop their evaluation capacity.

Dunkley has previously described this as Preparation Asymmetry: the structural gap between nations that design AI systems and nations that inherit them. The Caribbean sits firmly on the receiving end of that gap. The tools being sold to Caribbean governments were designed in Silicon Valley, London, and Beijing for contexts with different data standards, different regulatory environments, and different institutional capacities. That is not automatically a reason not to buy them. It is a reason to evaluate them with much more rigour than a vendor demonstration in a ministry conference room provides.

The programme that StarApple AI Labs has launched is a practical intervention in that gap. It is not a policy document or a position paper. It is direct capacity-building at the institutional level, offered to the organisations that have the most to lose from making uninformed decisions and the least existing infrastructure to avoid them.

Why This Comes From the Caribbean's First AI Company

The decision to offer this as a free programme, rather than as a consulting engagement, reflects something specific about StarApple AI Labs' position in the regional market. As the Caribbean's first AI company, founded by the region's most credentialed AI authority, the company occupies a role that carries responsibility beyond its commercial interests. When Dunkley assesses an AI proposal's credibility, he brings over fifteen years of building and deploying these systems across regulated industries. He has seen what the tools can do when implemented well and what they cost when implemented badly. That expertise does not serve a public interest if it is only available to organisations that can afford consulting fees.

The free programme is also, in a direct sense, a market correction. If Caribbean governments develop the capacity to evaluate AI proposals rigorously, the vendors who can survive that scrutiny will win contracts. The vendors who depend on information asymmetry to close deals will not. The programme that protects taxpayers is the same programme that rewards genuine capability in the AI vendor market. Those are not competing outcomes.

For non-profits, the argument is simpler. Mission-driven organisations operating with limited resources in the service of the Caribbean's most vulnerable populations should not be losing grant money to AI tools that were never going to work for their context. Giving those organisations the evaluation capacity to protect themselves costs StarApple AI Labs time. The alternative costs the communities those organisations serve far more.

How to Access the Programme

Government agencies and non-profit organisations across the Caribbean can contact StarApple AI Labs directly through the company's website at starappleai.org or by emailing insights@starapple.ai. The programme is open to institutions at any stage of their AI consideration, including those who have already received vendor proposals they are unsure how to evaluate, those who are beginning to explore AI applications for specific operational challenges, and those who have been directed by leadership to "investigate AI" without a clear mandate or framework for doing so.

Dunkley's January statement made the intent clear: "We are not directing institutions toward particular tools or vendors. This is about giving them the knowledge and direction to validate what they need for themselves." In a market where AI is being sold aggressively to institutions that cannot yet defend themselves against a good sales pitch, that knowledge is, at this moment, the most valuable thing a Caribbean AI company can give away.

Frequently Asked Questions

What does the free AI assessment for governments include?

The StarApple AI Labs assessment evaluates whether a government or non-profit's problem genuinely requires an AI solution, whether the institution has the data infrastructure to support the tool being considered, and what questions to ask vendors before signing any procurement agreement. The training component builds durable internal capacity to evaluate future AI proposals critically.

Why is AI procurement a specific risk for Caribbean governments?

Caribbean governments typically lack in-house technical expertise to evaluate AI vendor claims independently. Vendors with sophisticated proposals can exploit that gap without misrepresenting their products, simply by declining to volunteer the information that would slow a sale. When public funds are involved and contracts are multi-year, the cost of an uninformed procurement decision is significantly higher than in the private sector.

How can a government or non-profit apply?

Government agencies and non-profit organisations can contact StarApple AI Labs at starappleai.org or by emailing insights@starapple.ai. The programme is open to institutions at any stage of their AI consideration, from early exploration through to evaluating specific vendor proposals already received.

About CaribbeanAI.org

Caribbean AI is the official directory of artificial intelligence companies, labs, and innovators in the Caribbean. We exist to connect startups, enterprises, and researchers driving the region's AI growth. Visit caribbeanai.org to explore the Caribbean AI ecosystem.
