Terraphim AI - what is the difference?

What makes our AI assistant approach different, and the unique points of Terraphim AI:

  1. Privacy first: Instead of moving data, we codify and move knowledge graphs, which lets us build fast graph embeddings that can be deployed even into the browser via WASM. Users codify their knowledge and move that knowledge (graph embeddings) next to their data; the data itself never needs to cross boundaries.

  2. Action-oriented ontologies: focus on what you need to do. A self-driving car needs to recognise many objects but only does five things: go forward, turn left, turn right, reverse, and stop. If you start from what you need to do next, the solution search space can be kept very small, resulting in a compact knowledge graph.

  3. Role-based “lenses” on your data: we recognise that a single individual can have multiple roles and contexts for knowledge-based work and actions, so we allow a separate knowledge graph per role. Users can control and modify their knowledge graphs.
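As a sketch of the third point, a role-based lens can be modelled as nothing more than a separate small knowledge graph per role, with the active role deciding which graph answers a query. The classes, role names, and concepts below are illustrative assumptions, not Terraphim's actual API:

```python
# Minimal sketch of role-based "lenses": one small knowledge graph per role,
# so the same query resolves against different concepts depending on context.
# Class names, roles, and concepts are illustrative, not Terraphim's API.

from dataclasses import dataclass, field


@dataclass
class KnowledgeGraph:
    """Adjacency-list graph of concepts for a single role."""
    edges: dict[str, set[str]] = field(default_factory=dict)

    def add(self, concept: str, related: str) -> None:
        # Undirected link between two concepts.
        self.edges.setdefault(concept, set()).add(related)
        self.edges.setdefault(related, set()).add(concept)

    def neighbours(self, concept: str) -> set[str]:
        return self.edges.get(concept, set())


class Lens:
    """Switchable view: the active role decides which graph answers a query."""

    def __init__(self) -> None:
        self.graphs: dict[str, KnowledgeGraph] = {}
        self.active: str | None = None

    def graph_for(self, role: str) -> KnowledgeGraph:
        return self.graphs.setdefault(role, KnowledgeGraph())

    def query(self, concept: str) -> set[str]:
        assert self.active is not None, "select a role first"
        return self.graphs[self.active].neighbours(concept)


lens = Lens()
lens.graph_for("engineer").add("deployment", "rollback plan")
lens.graph_for("project manager").add("deployment", "stakeholder sign-off")

lens.active = "engineer"
print(lens.query("deployment"))         # {'rollback plan'}
lens.active = "project manager"
print(lens.query("deployment"))         # {'stakeholder sign-off'}
```

The same concept ("deployment") returns different neighbourhoods per role, which is the whole point of a lens: the graph, not the data, changes with context.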

Our approach removes a lot of complexity from AI assistants, fostering good old YAGNI.

In technology projects, misunderstandings arise between business and engineers, and the project or product manager comes into play to “translate” between the two worlds.
The PM wears several hats in such discussions and needs to switch them frequently, playing different roles and understanding different perspectives quickly. If you are such a PM, Terraphim will help you train your language in each domain and assist you in “translations.” Think of it as an interview or negotiation prep assistant that drives essential communications towards success.

[[Knowledge management]]: going from the 1st generation to the 2nd

  • Introduction

    • This is a classic PMP certification question: we have a problem; how many projects will it take to fix it? The correct answer is at least two: one to identify the problem and find a type of solution, and a second to implement the solution and ensure the problem is fixed.
    • In other words, we have two contracts and two project life cycles unified under a single system life cycle. Life cycles are how we think about progress in system development and deployment, and they are what we charge for and value. Each stage has its own vocabulary of terms and types of decisions to be made: design decisions in the early stages, logistics support and training organisation in the later ones.
    • What is the value of knowledge management in this practical context? I will use the conceptualisation of decision patterns offered by John Fitch to answer the question.
    • I want to show that:
      • There is an apparent causal relation between knowledge gain and SE [[Value]].
      • The value of SE effort is expressed in evident, action-centric and decision-oriented [[Measurement]] of the [[Project’s progress]].
      • [[Knowledge gain]] strengthens [[Causality]] and increases the chances of [[Success]] of the [[Project]], and that justifies why [[Knowledge management]] is an essential part of SE efforts.
  • Problem and Research question

    • No CEO wants to pay for knowledge management because it has no explicit value. If you want to gain knowledge, you can do it at your own expense, which has a significant limit. The problem is that the link between knowledge gain and delivered value lacks an explicit explanation. Let’s make the problem and the context more formal, because we want to automate and apply machine learning (specifically, [[Causal inference]] and [[Decision trees]]) as much as possible.
    • What is a [[Causality]] model between a piece of knowledge, a [[Concept]], and the [[Value]] of system engineering activities? Or, how can a [[Knowledge gain]] be billable and reportable?
  • Context and definitions

    • We often say that we move from a black box to a transparent or white box while designing, building, and commissioning a system. We explain that with the stage-gate model of a [[Life cycle of a system]]. Let’s define this model in causal inference terms as a causal graph. An example of such a conceptualisation is “The 4D paradigm of WBS” (researchgate.net).
    • A [[Program stage]] at its [[Decision-making point (project gate)]] is a stable, self-sustainable, non-degrading [[Situation]] of building a [[System]]. Stages exist only together, sequenced in an order explained by both the [[Causality]] of design and build [[Decision]]s and the architecture [[Intent]]. Under these assumptions, we can model a stage as a node in a [[Causal graph]] and start explaining how knowledge contributes to the success of a project.
    • [[Success]] is succession: the inheritance of knowledge from one stage to the next, especially given people’s turnover between stages of massive programs.
    • Secondary research question:
      • How can we learn what project stage we are in if we don’t have the prescribed chart on the wall?
      • We can only reverse engineer the decisions and learn from them; these are the natural boundaries of stages.
  • Solution

    • Fitch, John. “Reverse Engineering Stakeholder Decisions from Their Requirements,” no. June 2022 (2022). https://issuu.com/ppisyen/docs/ppi_syen_113_june_2022.
    • We can build a decision causal graph for each development and box those decisions into stages, with gates as stable situations in which we can stop and postpone the project, restarting it later once a further contract is agreed.
    • Knowledge supports decision-making for invariants of such graphs ([[Token-causality]] and type-causality).
    • However, such knowledge must be executable and linked to the types/classes of decisions it supports: formally modelled knowledge, not just documented knowledge. Otherwise, ML frameworks cannot process it.
    • Examples: neo4j, Logseq/Obsidian graphs, Linked Data, and Atomic Data (federated data with types and interfaces mapped to action-centric taxonomies), and MBSE-enabled knowledge bases.
  • Benefits

    • Practical on-the-job training.
    • Actionable presentations (for example, role-specific RAG and Q&A).
    • A new type of textbook, mapped to life cycle models.
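The decision-boxing idea above can be sketched in a few lines: decisions form a causal graph, and stages fall out as topological layers of that graph, with a gate closing once every decision in a layer is made. The decision names and the layering heuristic below are illustrative assumptions, not Fitch's method:

```python
# Sketch of a decision causal graph boxed into stages (illustrative names only):
# decisions are nodes, "depends on" edges are causal links, and stages are the
# topological layers, computed here as the longest upstream chain of decisions.

from collections import defaultdict

# decision -> upstream decisions it depends on
depends_on = {
    "choose architecture": [],
    "select supplier": ["choose architecture"],
    "design interfaces": ["choose architecture"],
    "plan logistics support": ["select supplier", "design interfaces"],
}


def stage_layers(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group decisions into sequential stages by longest upstream chain."""
    depth: dict[str, int] = {}

    def d(node: str) -> int:
        if node not in depth:
            depth[node] = 1 + max((d(p) for p in deps[node]), default=-1)
        return depth[node]

    layers: dict[int, list[str]] = defaultdict(list)
    for node in deps:
        layers[d(node)].append(node)
    return [sorted(layers[k]) for k in sorted(layers)]


for gate, decisions in enumerate(stage_layers(depends_on), start=1):
    print(f"stage {gate}: {decisions}")
```

Each printed layer is a candidate stage, and the boundary between layers is exactly the kind of "natural boundary" the secondary research question asks to reverse engineer.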

Causal revolution - the first move

Currently, engineering companies underexploit one knowledge asset: their process models and non-machine-readable diagrams (system architectures, functional designs, or root cause analysis charts). Those assets require a competent subject matter expert to interpret them, and if she doesn’t remember the appropriate paper-based model at the time of analysis, that is a wasted opportunity. This happens more often than not.
The causal revolution of the last seven or so years has the potential to change that. Each paper-based model, such as a process model or a system architecture diagram, can be translated into a causal inference model and thus becomes a digital asset directly embeddable into a search or recommendation engine. Your search results then become not only relevant but also actionable and interpretable. We have started to build a use case showing how to go from such flip-chart models to causal inference; stay tuned for updates.

Mapping the knowledge coastline

If we analyse which verbs relate to knowledge, they group into the knowledge management life cycle model:
Acquiring: gain, acquire, seek, obtain, learn
Using: apply, use, utilise
Sharing: share, disseminate, transmit, impart, transfer
Understanding: understand, manage, organise
Producing: generate, develop

Those actions map to the life cycle model: creation, capture, organisation, sharing, utilisation, evaluation, preservation, and disposal.

If we further analyse those verbs, we conclude that knowledge is something we keep in a place and can fetch from it. It is also useful and can be combined and disassembled. On closer analysis, the usefulness of most knowledge depends on the situation: the boundaries and granularity of knowledge are situational. Unlike physical things, knowledge structures are not rigid. Modern note-taking tools like Obsidian, Roam Research, or Logseq support this bottom-up structuring of concepts and records.

The situatedness of knowledge also creates the need to manage synonyms that suit both the broader concept and the narrower term describing the specifics of the situation for which we define the knowledge’s value.

What we don’t have is an approach to navigating such semantic spaces. Without global coordinates to help us, we can only follow the knowledge coastline.