From a summer workshop
to the defining technology of our era
The story of artificial intelligence begins in the summer of 1956, when a small group of mathematicians and scientists gathered at Dartmouth College with an audacious idea: that every aspect of human intelligence could, in principle, be described precisely enough for a machine to simulate it.
What followed were decades of alternating excitement and disappointment — periods of breathless optimism, then harsh winters when funding dried up and promises went unfulfilled. Early programs could play chess and solve algebra problems, but the real world proved stubbornly more complex than any formula.
The founding era. Researchers build the first neural networks and symbolic reasoning systems, convinced that human-level intelligence is just around the corner.
Expert systems briefly thrive in industry, then collapse under their own rigidity. The field splinters, then quietly rebuilds — this time grounded in statistics and data.
Deep learning emerges. Powered by vast datasets and GPU computing, neural networks begin beating humans at image recognition, translation, and game-playing — tasks once thought impossible for machines.
Large language models arrive. AI moves from a specialist tool to a general-purpose capability, woven quietly into the fabric of everyday life.
AI is not a distant technology —
it is already your daily companion
Most people encounter artificial intelligence dozens of times before breakfast without realising it. The route your maps app chooses, the face that unlocks your phone, the email that gets quietly filtered to spam, the music that somehow knows your mood on a Tuesday morning — all of these are AI at work, invisibly and continuously.
When you ask a voice assistant a question, translate a foreign menu with your camera, or receive a surprisingly accurate product recommendation, you are benefiting from decades of research compressed into a fraction of a second of computation.
More recently, conversational AI has made this relationship explicit. For the first time, ordinary people can hold an open-ended dialogue with a machine — asking it to explain a medical term in plain language, help draft a difficult email, or talk through a decision at midnight when no one else is available. The technology is imperfect, but its utility is real and growing.
Understanding AI — even at a broad level — matters because it shapes what we ask of it, what we trust it with, and what boundaries we set. That is precisely why we research it.
From a whitepaper to
a new architecture of trust
Blockchain did not emerge from a university lab or a corporate R&D programme. It arrived in October 2008 as a nine-page document posted online under a pseudonym, authored by someone — or some group — known only as Satoshi Nakamoto. The paper described Bitcoin: a way for two people to exchange value over the internet without needing a bank, a government, or any trusted third party in between.
The elegant idea at its core was a shared ledger — a chain of records, each mathematically linked to the one before it, maintained simultaneously by thousands of computers around the world. To alter any entry, you would need to redo the computational work for that record and every record that followed it, and do so faster than the rest of the network combined. Tampering becomes computationally infeasible. Trust becomes structural rather than institutional.
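To make that linking concrete, here is a minimal Python sketch of a hash-linked ledger. It illustrates the principle only, not Bitcoin's actual code: each record's hash folds in the hash of the record before it, so editing any old entry breaks every link that follows. The names used here (block_hash, build_chain, is_valid) are ours for illustration; a real blockchain adds proof-of-work and thousands of replicated copies on top of this structure.

import hashlib
import json


def block_hash(index: int, data: str, prev_hash: str) -> str:
    """Hash a record together with the hash of the record before it."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash})
    return hashlib.sha256(payload.encode()).hexdigest()


def build_chain(records: list[str]) -> list[dict]:
    """Link each record to its predecessor by folding that hash into its own."""
    chain, prev = [], "0" * 64  # placeholder predecessor for the first record
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain


def is_valid(chain: list[dict]) -> bool:
    """Recompute every link; an edited record breaks the chain from that point on."""
    prev = "0" * 64
    for block in chain:
        recomputed = block_hash(block["index"], block["data"], block["prev"])
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True


chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"])
print(is_valid(chain))                   # True
chain[1]["data"] = "Bob pays Carol 200"  # tamper with an old record
print(is_valid(chain))                   # False: the links no longer check out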
The Bitcoin whitepaper is published. The first block — the "genesis block" — is mined in January 2009, embedding a newspaper headline about bank bailouts as a quiet statement of intent.
Ethereum is proposed by a 19-year-old developer. It extends blockchain beyond currency into programmable contracts — self-executing agreements written in code, with no lawyers required.
A speculative frenzy sweeps through crypto markets. Beneath the noise, enterprises quietly begin exploring private blockchains for supply chains, trade finance, and identity verification.
Decentralised finance, digital ownership, and central bank digital currencies push blockchain into mainstream policy debates. The technology matures — slowly, unevenly, but irreversibly.
Blockchain is reshaping
how we prove, own, and transact
For most people, blockchain remains abstract — associated with volatile cryptocurrencies and speculative headlines. But the underlying technology is quietly solving a much older problem: how do you establish trust between strangers without relying on an institution that could fail, overcharge, or exclude you?
When you send money internationally today, it passes through a chain of correspondent banks, takes several days, and loses a percentage to fees. A blockchain-based transfer can settle in seconds, across borders, at a fraction of the cost. For the 1.4 billion adults worldwide without a bank account, this is not a convenience — it is access.
Closer to home, the same principles apply to proving who you are, what you own, and what you agreed to. Digital credentials anchored to a blockchain can be verified by anyone and cannot be quietly altered after the fact. Smart contracts execute automatically when conditions are met — no intermediary, no delay, no dispute about whether the terms were honoured. Property records, voting systems, medical histories, creative royalties — each is being reimagined through this lens.
None of this is frictionless yet. Scalability, energy use, and regulation remain genuine challenges. But the direction is clear, and the implications are broad enough to warrant serious, sustained research.
Japina — researching artificial intelligence and blockchain,
so you don't have to take them on faith.