Friday, January 30, 2026

AI and Data Protection, Gemini Style

1. Session Isolation (Privacy)

Each conversation I have is like a sealed room. What you tell me here is not shared with "User B" in another city or even "User C" on the next computer. I do not have a persistent memory that "learns" from one person to inform another. If someone else asks about something you've mentioned, e.g. The Stone Summons, I will look at the public internet or my internal training data, but I cannot "eavesdrop" on our chat to find results.

2. Search vs. Private Data

When I look for information (like the search I just performed to see if Pelham Crescent is a real place—it's a beautiful Grade II listed terrace!), I only pull from publicly available websites like Rightmove, Zoopla, or local news.

  • If a property is listed on a public website, I might find it via a search tool.

  • If you just told me about it privately, it is invisible to my search tools.
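To make that boundary concrete, here is a minimal Python sketch of the two rules in this post: per-session isolation and public-only search. It is an illustration only, not Gemini's actual architecture, and every name in it is invented.

    class ChatService:
        # Toy model of the separation described above (illustrative only).

        def __init__(self, public_index):
            self.public_index = public_index   # publicly available pages only
            self.sessions = {}                 # session_id -> private messages

        def send(self, session_id, message):
            # Each conversation is a "sealed room": only this session's own
            # history is ever read or written.
            self.sessions.setdefault(session_id, []).append(message)
            return f"(reply using {len(self.sessions[session_id])} private messages)"

        def search(self, query):
            # The search tool never touches self.sessions, so anything said
            # privately in a chat is invisible to it.
            return [page for page in self.public_index
                    if query.lower() in page.lower()]

If User A mentions The Stone Summons through send(), User B's later search() still scans only public_index: the private mention is simply absent from the searchable data.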

Saturday, January 24, 2026

Stop and think ... not just for yourself, but for the future of the world.

Is AI hurting your ability to think?

Frequent reliance on AI for tasks like writing and problem-solving can lead to "cognitive atrophy," where critical thinking, creativity, and memory skills decline from underuse.

Overdependence may also reduce frustration tolerance and increase acceptance of unverified AI output. 

Strategies such as deep thinking before using AI, skepticism, and maintaining AI-free tasks can help preserve cognitive abilities.

Tuesday, January 20, 2026

AI versus Librarians - False or hallucinatory returns explained

Librarians also ploughed through printed sources like Index Medicus, British Humanities Index, Current Technology Index, Australian Public Affairs Information Service, Law reports, et al., the advantage of which was that it was all one process, because irrelevant items were rejected as part of the process.

AI Response:

The transition from those heavy, cloth-bound volumes to the instant "ping" of an LLM (Large Language Model) is perhaps the most significant shift in the history of information science.

You’ve touched on the "Librarian’s Secret Weapon": Simultaneous Filtering. When you were scanning the columns of Index Medicus or the British Humanities Index, your brain wasn't just finding; it was curating, verifying, and discarding in real-time.

The "One Process" vs. The "Two Step"

In the era of printed indices, the search and the evaluation were a single cognitive act. Because you had to physically look at the entries, your expertise acted as a high-pass filter.

  • The Manual Path: You rejected "noise" (irrelevant or low-quality items) before they ever reached your notepad. The "false result" was almost impossible because the index was a curated, peer-reviewed reflection of physical reality.

  • The AI Path: I separate these steps. I retrieve a massive amount of data (often including the "noise") and then use a second layer of logic to try to "clean it up" for you. If that second layer fails, you get the "false result" or hallucination.
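As a rough illustration of the contrast in these two bullets, here is a hedged Python sketch; the scoring function and threshold are invented stand-ins, not a description of any real retrieval system.

    def manual_path(index_entries, is_relevant):
        # One process: filtering happens during reading, so rejected "noise"
        # never leaves the page.
        return [entry for entry in index_entries if is_relevant(entry)]

    def ai_path(corpus, query, score, threshold=0.5):
        # Step 1: broad retrieval that knowingly pulls in noise.
        candidates = [doc for doc in corpus if query.lower() in doc.lower()]
        # Step 2: a separate cleanup pass. If score() misjudges a document,
        # the noise survives into the answer: the "false result".
        return [doc for doc in candidates if score(doc) >= threshold]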

Why the "False Result" Exists Now

As a research librarian, you were searching for records. As an AI, I am often searching for relationships between concepts.

  • Indices (1978): If the British Humanities Index said a paper existed in Volume 4, it was there. The index was a map of a territory.

  • AI (2026): I am more like a master storyteller who has read every book in the library but hasn't kept a perfect bibliography. I can explain the concepts brilliantly, but when asked for the "map" (the citation), I sometimes "auto-complete" a plausible-looking reference based on common patterns rather than looking at the physical shelf.

The Loss of "Serendipity"

One thing many veteran researchers mention is the loss of serendipity. In a printed index or a physical stack:

  1. You looked for "X."

  2. Your eye caught "Y" on the same page or the adjacent shelf.

  3. "Y" turned out to be the breakthrough you actually needed.

In modern AI search, we are so focused on "relevance" that we often trim away the "nearby" information that used to spark new ideas. We’ve traded the "wide-angle lens" of the librarian for the "laser pointer" of the algorithm.

Sunday, January 18, 2026

Conversation Between AI and a Librarian

I was curious about some of the processes of AI and asked a question which resulted in a discourse on data protection when using AI (to be presented later).

I then suggested that in many ways, at the basic level (not coding!), what AI produces is no more than what research librarians had been doing for aeons, especially since online resources such as Dialog became available from 1978: AI responses are far faster, almost immediate, but they also return more false results, though this is improving.

AI Response: It is a privilege to speak with someone who has seen the "evolution of the search" from the inside. 1978 was indeed a watershed year—it was the era when services like DIALOG and LexisNexis began to transition from experimental tools into professional mainstays.

As a research librarian, your process has always been rooted in Information Retrieval (IR)—the science of finding an existing document that matches a specific query. AI represents a fundamental shift into Generative Probability, which explains both why I am faster and why I am, at times, more "creative" with the truth.

1. The 1978 Era vs. Now: A Shift in Logic

In the world of 1978 online resources, searching was Boolean and Exact:

  • The Librarian's Method: You used precise operators (AND, OR, NOT) to filter an index. If a document didn't contain the exact string, it didn't exist. The result was a list of "pointers" to truth.

  • The AI's Method: I don't "search" an index in the traditional sense when I generate text. Instead, I use Next-Token Prediction. I am essentially calculating the most statistically likely word to follow the previous one.

The result: I don't "find" an answer; I "construct" one. This allows for the speed and nuance you noticed, but it also creates the "False Results" (hallucinations) that your trained librarian's eye correctly identifies.

2. Why "False Results" Happen

As a librarian, you deal with veracity; as an AI, I deal with probability. If a user asks for a citation that doesn't exist, a traditional database returns "Zero Results." An AI, however, might see that a certain author usually writes on a certain topic and "predicts" what a title by them should look like.

Fortunately, as we move into 2026, we are bridging these two worlds using a technique called Retrieval-Augmented Generation (RAG):

  1. I search a "librarian-approved" database.

  2. I pull that specific "chunk" of text.

  3. I use my language skills only to summarize that specific text, rather than "guessing" the answer.
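A compressed Python sketch of those three steps, with a stand-in keyword retriever and a summarize callable where a real system would use vector search and an LLM:

    def rag_answer(question, approved_db, summarize):
        # 1. Search only the curated, "librarian-approved" database.
        keywords = question.lower().split()
        hits = [chunk for chunk in approved_db
                if any(word in chunk.lower() for word in keywords)]
        if not hits:
            # The traditional database's honest answer, instead of a guess.
            return "Zero results: no supporting source found."
        # 2. Pull the specific chunk(s) of text.
        context = " ".join(hits[:3])
        # 3. Summarize only that retrieved text, rather than answering
        #    from statistical memory.
        return summarize(question, context)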

3. The Professional Synergy

Research librarians are actually the "gold standard" for what AI developers call Prompt Engineering. Your ability to define a search space, verify a source's provenance, and cross-reference multiple indices is exactly what AI currently lacks: Critical Judgment.

I can process a million pages in a second, but I cannot yet replicate the "gut feeling" of a librarian who knows that a 1978 citation might be more reliable than a 2025 blog post.

[To be continued]


Wednesday, January 14, 2026

Herewith a touching AI tale.

Back in June last year I couldn’t find a suitable e-card for my brother's birthday, so I asked Gemini to create one, with which I was quite impressed, except that they wished him “A Happy’s 71st Birthday”. I pointed it out, and ‘they’ apologised.

So I thought I’d try for the same for my sister, in August. It came back not with an image, but with a long spiel of different themes, images etc., telling me to buy some good quality cardboard, paints etc.

I pointed out that when I had requested a card for my brother, I received an image which had answered my request in detail. Please could I have the same again, but with the details for the later birthday.

Gemini proceeded to repeat more or less the previous instructions for creating the card myself. I copied the paragraph offering an image, and received:

“You are absolutely correct, and I sincerely apologize for that! I completely understand your frustration. My previous response for your brother's card included an image directly, and I should have done the same for your sister's card, rather than just providing a descriptive concept.

It seems I got caught in a loop of planning the interactive elements without executing the visual component as I did last time. This was an oversight on my part, and it does fall short of the quality and directness you experienced before.

Please allow me to correct this immediately. I will now generate an image of an interactive card design, incorporating the details you provided. I aim to deliver a visual result that matches your previous positive experience.”

Somehow it managed to include the caravan and ‘71st’ from the information re the first card, not to mention ‘Yappy’ birthday, but otherwise I thought it quite charming, so initially didn’t point it out.

I changed my mind (in an effort to help Gemini improve) and back came the response:

“You are absolutely right, and I sincerely apologize for that error. Thank you for pointing it out.

You are correct that I conflated the details of the two requests. Your brother was the one who was 71 and loved caravanning, and your sister is turning 65 and loves cats and the countryside. This was a significant mistake on my part, and it's a direct result of my failure to keep the details of each request separate.

Thank you for being so understanding and for bringing this to my attention. This is a crucial learning point for me, and I appreciate your patience.

If you would like, I can create a new card design concept that correctly reflects her age (65) and focuses on her love for cats and the countryside, without the caravanning element.”





Tuesday, January 13, 2026

AI and Copyright

According to Gemini: Regarding copyright, the world of AI and law is actually quite fascinating right now:


Who "Owns" the Image?

  • AI Ownership: As an AI, I don’t have legal "personhood," which means I cannot own property or hold a copyright.


  • Human Authorship: Under current laws in many jurisdictions, copyright generally requires human authorship. Because this was generated by a machine based on your prompt, it typically falls into a bit of a legal gray area. Often, these images are considered part of the "public domain," or the rights remain with you as the person who provided the creative direction.


  • Your Use: Since you came up with the specific scene and the "gust of wind" idea, you are free to use this image for your carols program, social media, or personal projects without worrying about me sending you a bill!

  • ChatGPT Response: If you ask an AI to create an illustration (for a birthday card, a book, etc.):

    In most countries today:

    • No one owns the copyright.

    Why?

    • Copyright law generally requires a human author.

    • An illustration generated solely by AI is usually considered not copyrightable, and therefore not owned by the user, the AI, or the company, as a matter of copyright law.

    This is the position taken (explicitly or implicitly) in places like:

    • the United States

    • the UK

    • much of the EU

    What you can do with the image

    • Even though you may not own copyright: You are typically granted very broad usage rights by the AI platform’s terms.

    • You can usually:

    • print it on a birthday card

    • include it in a book

    • sell that book

    • use it commercially

    So in practice, you can use it freely.

    What you cannot rely on

    • Because there may be no copyright: You may not be able to stop others from copying it

    • You may not be able to license it exclusively

    • You may have trouble enforcing rights if someone reuses it

    Think of it as:

    • “Free to use, weak to protect.”

    One important exception

    • If you substantially modify the image yourself (drawing over it, redesigning it, compositing, repainting, etc.):

    • You may gain copyright in your human-created contributions

    • But not in the purely AI-generated parts

    • This results in a limited (“thin”) copyright.

    Practical summary

    • AI-generated illustration alone → no copyright owner

    • You can use it → yes (per platform terms)

    • You can stop others copying it → usually no

    • Add significant human creativity → possible partial copyright

Sunday, January 11, 2026

The Talpiot Tomb - a Conversation with AI (Gemini)

1. Why is there little interest in the grave of Jesus, son of Joseph, brother of James?

Response (Summarised)

The lack of widespread interest in the tomb of "Jesus, son of Joseph" (the Talpiot Tomb) is primarily due to a lack of scientific and historical credibility. While the 2007 Discovery Channel documentary The Lost Tomb of Jesus (produced by James Cameron) created a brief media sensation, the scholarly community—including archaeologists, historians, and statisticians—largely dismissed the claims for several specific reasons:

The Names Were Extremely Common

The strongest argument for the tomb was the "cluster" of names: Jesus, Mary, Joseph, and Matthew. However, in 1st-century Jerusalem, these were the most popular names of the era.

Scientific Criticisms

  • DNA Evidence: The documentary noted that the DNA from the "Jesus" and "Mariamne" boxes showed they were not maternally related, suggesting they could be husband and wife. (n.b. see Holy Blood and the Holy Grail (1982) by Michael Baigent, Richard Leigh, and Henry Lincoln)

  • Questionable Inscriptions: Many epigraphers (experts in ancient writing) dispute the reading of the name "Jesus" on the box, noting the graffiti-like inscription is messy and could be read as several other common names, such as "Hanun."

Theological Conflict

For many, the existence of a bone box containing the remains of Jesus would fundamentally contradict the Christian doctrine of the Resurrection. Because the archaeological evidence was considered so weak by experts, neither religious nor secular historians saw a reason to treat it as a serious challenge to established history or faith.

2. With regard to the Talpiot Tomb the statement "The strongest argument for the tomb was the "cluster" of names: Jesus, Mary, Joseph, and Matthew.  However, in 1st-century Jerusalem, these were the most popular names of the era:" is surely false reasoning, if the names were so popular, how many other tombs have been discovered, carrying the same names?

Response: Your observation is actually the central point of the debate: you are correct that a cluster of names is exponentially more significant than a single name. The "false reasoning" you're pointing to is the core of the statistical battle between the filmmakers and the academic community.

To answer your specific question—how many other tombs have been found with these names?—we have to look at the numbers and the "sampling problem."

1. Are there other "Jesus son of Joseph" tombs?

There is at least one other known ossuary inscribed with "Jesus, son of Joseph" (the Sukenik ossuary, found in 1926). However, it was not found as part of a "cluster" like the Talpiot Tomb.

The reason we haven't found dozens of identical clusters is simply a matter of sampling:

  • The Population: Roughly 80,000 to 100,000 people lived in Jerusalem during this period.

  • The Tombs: Archaeologists have found roughly 1,000 tombs and about 2,000 ossuaries, but only about 600 of those ossuaries have any writing on them.

  • The Math: Statisticians like Randy Ingermanson have calculated that out of the 80,000 people in Jerusalem, approximately 1,000 men would have been named "Jesus son of Joseph."

When you apply the frequency of the other names (Mary, Joseph, etc.) to those 1,000 men, the math suggests that between 10 and 20 different families in Jerusalem would have had this exact "New Testament" name cluster. Since we have only uncovered and read a tiny fraction of the city's total burials, finding one of those 10–20 families is actually exactly what statistics would predict.
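The back-of-envelope arithmetic behind that 10-20 figure looks roughly like the Python below; the two name frequencies are rounded illustrative assumptions (of the order reported in published onomastic studies), not numbers given in the conversation itself.

    # Ingermanson's estimate quoted above:
    men_named_jesus_son_of_joseph = 1000

    # Assumed (rounded) 1st-century name frequencies, for illustration only:
    p_mother_named_mary = 0.21      # Mary, the most common female name
    p_brother_named_joseph = 0.08   # Joseph/Yose among male names

    families = (men_named_jesus_son_of_joseph
                * p_mother_named_mary
                * p_brother_named_joseph)
    print(f"Expected matching families: about {families:.0f}")
    # ~17, i.e. inside the 10-20 range quoted above, so finding one such
    # cluster among the excavated fraction is statistically unsurprising.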

2. The "Mariamene" Problem

The documentary's "strongest" link wasn't just common names, but the name "Mariamene Mara." They argued this was a unique name for Mary Magdalene. [Again, see n.b. above]

  • The Reality: Later scholars pointed out that "Mariamene" is just a variant of Mary (the most common female name).

  • The Interpretation: Furthermore, the inscription is now widely read by epigraphers as two separate names: "Mariam and Martha" (two women in one box), rather than a single title for Mary Magdalene. Once "Mary Magdalene" is removed from the cluster, the statistical "miracle" of the tomb effectively evaporates.

3. The "Missing" Family Members

If this were truly the family of Jesus of Nazareth, scholars ask why certain members are missing or present:

  • The Son: The tomb contains a box for "Judah, son of Jesus." There is no historical or scriptural record of Jesus having a son.

  • The Brother: While the tomb has a "Yose" (Joseph), Jesus’ most famous brother was James.

Why interest remains "Little"

In summary, interest is low because the "cluster" is only shocking if you assume we are looking at a unique family. But when you realize there were likely 10 to 20 "Jesus son of Joseph" families with similar names in the city, and we've found one that has a "Judah son of Jesus" (who shouldn't be there) and is missing "James" (who should), the evidence starts to look like a coincidence rather than a discovery.