SynaptiFlux: a toy neuron synapse model

Description

This project is an experiment in abstracting a neuron/synapse model into a collection of modular components. It is more of a theoretical exercise than a project with immediate practical ambitions. It is hoped, however, that other researchers will contribute further ideas, and code, to this project. It would be great to have more interesting neural modules than just the current sequence and print-symbol modules. This neuron model is intended to be more complete than traditional artificial neural networks (ANNs), which are built around layers of matrix multiplications.

Motivation

This project borrows some of the ideas from my older project, the Semantic DB, in particular the idea of if-then machines. Here is a sample machine:

pattern |node: 1: 1> => sp_1
pattern |node: 1: 2> => sp_2
...
pattern |node: 1: n> => sp_n
then |node: 1: *> => |some output ket>

where the sp_n are superpositions representing different patterns; if the input matches one of them closely enough, the if-then machine triggers and invokes that machine's then rule.

Issues

This project is very new (less than two weeks old), and the current author is not yet sure where to take it from here. The documentation is currently minimal, and the structure of the project is quite abstract, making it somewhat difficult for other researchers to contribute. The back-end code is also verbose relative to what it actually achieves; perhaps this could be improved. One completely missing component is the ability for the system to learn by itself, a key feature of biological neurons.

Wish list

Any contributions in the form of ideas for the future direction of the project, or practical code contributions, would be greatly appreciated. Ditto any help with documentation. For example, the current author would like to see some non-trivial synapse functions, perhaps capable of representing the idea of long-term potentiation (LTP). This would require synapses to have some kind of long-term memory of the activation history of a given synapse. It is not yet clear how difficult this would be to add to the project, but it is possibly as simple as a Python dictionary per synapse.

Project resources

A README and all the associated code are available on my SynaptiFlux GitHub.


Research Update

One of the things listed as missing in the previous post was the complete lack of auto-learning. Well, with quite a bit of thought, and code, I have a basic version working. The idea is based on the old line “neurons that fire together, wire together”, which we are going to call associative sequence memory.

So, at designated times, take a snapshot of the currently active synapses, check if they activate an existing neuron, and if not, store that pattern into a new neuron/synapse pair. Later down the track we can prune neurons whose activation count is below some given threshold.

What does “designated times” mean? Well, in the general case I don’t know, but in the current case it is triggered by punctuation characters. We have two relevant neuron flags that implement this: the init store buffer flag and the store buffer flag. The init flag clears the buffer and starts a delay counter. The store flag uses this delay counter to only consider synapses that have fired in the last few time steps. It then checks whether the currently active synapses already trigger a neuron; if so, it does nothing, and if not, it wires in a new neuron. By taking delay into account, this system easily learns sequences, e.g., in the example below, words separated by punctuation characters.
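The store/prune logic can be sketched roughly as follows. This is a hypothetical simplification, not the project's actual API: neurons are reduced to a dict mapping neuron names to stored synapse patterns, and maybe_store_pattern/prune are illustrative names.

```python
def maybe_store_pattern(neurons, active_synapses, new_name):
    """If the currently active synapse pattern does not already trigger a
    known neuron, wire in a new neuron for it; otherwise do nothing."""
    pattern = frozenset(active_synapses)
    if not pattern:                      # empty pattern: nothing to learn
        return None
    for name, stored in neurons.items():
        if stored == pattern:            # pattern already triggers a neuron
            return name
    neurons[new_name] = pattern          # wire in a new neuron/synapse pair
    return new_name

def prune(neurons, activation_counts, threshold):
    """Later down the track, remove neurons activated fewer than
    `threshold` times, i.e., irrelevant neurons."""
    for name in [n for n in neurons if activation_counts.get(n, 0) < threshold]:
        del neurons[name]
```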

To help with this, we have some required pieces:

  • activation count: each time a neuron is activated, increment its activation count.
  • prune: removes neurons, and their corresponding synapses, whose activation count is below some threshold, i.e., irrelevant neurons.
  • layers: to keep the active-synapse pattern clean, we use the idea of layers. The initial neurons are in layer 0, the neurons that take their input are in layer 1, and so on for higher layers. In the example below we only consider active synapse patterns for layer 1, and store the resulting neurons and synapses in layer 2.
  • auto delay synapses: instead of hand-wiring in synapses for all the different delays, we use auto delay based on the synapse name. E.g., "some synapse S0 D3" is taken to be the synapse "some synapse S0" with a delay of 3.
  • active synapses: returns a list of active synapses at the current time step, in the given layer, with a max delay count.
  • test pattern: tests if the given pattern invokes a known neuron; usually used in combination with active synapses.
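As a rough illustration of the auto delay naming convention (a sketch, not the project's actual parsing code), a delay suffix can be split off a synapse name like so:

```python
def split_auto_delay(synapse_name):
    """Split an auto-delay synapse name such as 'some synapse S0 D3' into
    the underlying synapse name and its integer delay."""
    base, sep, delay = synapse_name.rpartition(' D')
    if not sep or not delay.isdigit():   # no delay suffix: default to delay 0
        return synapse_name, 0
    return base, int(delay)
```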

Example of active synapses

For example, the word “Hello” has the following active synapses for layer 1 and a max delay of 4:

['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']

Test script

Written with the help of ChatGPT, I have a bash script test_scripts.sh that runs all my testing Python scripts, checks if any of them raise exceptions, and stores the results in a date-and-time-stamped log directory. I took this approach because the script outputs were too large to wire into traditional unit-testing approaches (that I know of). A further benefit is that I can choose a date/time and check the output of any of my scripts at that time.

ToDo

  • fix the test-for-existing-neuron code. I’m not yet sure why it is broken.
  • implement layers of this auto sequence learning. So layer 2 would be individual words, made up of characters, and layer 3 would be phrases, made up of sequences of learned words.
  • generalize the idea of punctuation characters to other sequence types. I.e., when is a good time to invoke the store buffer flag?
  • come up with examples other than learning characters into words.
  • come up with ideas for other auto-learning mechanisms.
  • update the project README.

Project GitHub

The full code is available on my GitHub.


Here is auto-learning in action

In this example we feed the sequence “Hello, Hello!” into our system. This sequence was chosen so that the first “Hello” would store a pattern into a neuron/synapse pair, and the second “Hello” would trigger that neuron without storing its own pattern. This only half worked: because our test-neuron code is currently broken, the pattern was stored twice (N0 and N2). Note that N1 is an empty pattern triggered by the sequence of punctuation characters ", ".

We note that the output of this system is

stored sequence: Hello
Hello, Hello!

The first line is the output from our new neuron N0 (triggered by the second occurrence of the “Hello” string), and the second line is the output from our full input sequence “Hello, Hello!”.

Here is the full output of our testing script

$ python3 testing_system_print_sequence.py
Let's implement 'Hello World!' in a print sequence system:
To store:
    name: N0
    delay: 5
    layer: 1    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']
    neurons: []
    buffer: Hello
Will store a neuron and synapse!

Neuron: N0
    layer: 2
    activation count: 0
    pooling: <function pooling_or at 0x7febc73afea0>
    params: {}
    patterns: 1
        0    trigger: <function trigger_dot_product_threshold at 0x7febc73afe18>
        0    params: {'threshold': 5}
        0    [1, 1, 1, 1, 1]
        0    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']

    axon: []

Synapse: N0 S0
    source axon: N0
    source layer: 2
    type: <function synapse_delayed_identity at 0x7febc73af730>
    params: {'sign': 1, 'delay': 0}
    action: <function action_println at 0x7febc7358158>
    action params: {'s': 'stored sequence: Hello'}
    spike history: []

To store:
    name: N1
    delay: 0
    layer: 1    []
    neurons: []
    buffer:
Empty pattern!

To store:
    name: N2
    delay: 5
    layer: 1    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']
    neurons: []
    buffer: Hello
Will store a neuron and synapse!

Neuron: N2
    layer: 2
    activation count: 0
    pooling: <function pooling_or at 0x7febc73afea0>
    params: {}
    patterns: 1
        0    trigger: <function trigger_dot_product_threshold at 0x7febc73afe18>
        0    params: {'threshold': 5}
        0    [1, 1, 1, 1, 1]
        0    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']

    axon: []

Synapse: N2 S0
    source axon: N2
    source layer: 2
    type: <function synapse_delayed_identity at 0x7febc73af730>
    params: {'sign': 1, 'delay': 0}
    action: <function action_println at 0x7febc7358158>
    action params: {'s': 'stored sequence: Hello'}
    spike history: []

stored sequence: Hello
Hello, Hello!
Neural System: example sequence system
--------------------------------------

Sources:
    #OFF#: 0 <generator object source_off at 0x7febc73b2678>
    #ON#: 1 <generator object source_on at 0x7febc73b26d0>
    #INIT#: 0 <generator object source_init at 0x7febc73b2728>
    #ALT-1#: 1 <generator object source_alt_N at 0x7febc73b2780>

Channels: ['!seq-0!', '!seq-1!', '!seq-10!', '!seq-11!', '!seq-12!', '!seq-13!', '!seq-14!', '!seq-2!', '!seq-3!', '!seq-4!', '!seq-5!', '!seq-6!', '!seq-7!', '!seq-8!', '!seq-9!', '#ALT-1#', '#INIT#', '#OFF#']

Modules:
Module: sequence module
    delay counter: 20

    inputs:
        #INIT# -> "init flag"   history: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #ALT-1# -> "carry flag"   history: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        #OFF# -> "off flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

    outputs:
        "0 neuron S0 delta" -> !seq-0!    history: [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "1 neuron S0 delta" -> !seq-1!    history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "2 neuron S0 delta" -> !seq-2!    history: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "3 neuron S0 delta" -> !seq-3!    history: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "4 neuron S0 delta" -> !seq-4!    history: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "5 neuron S0 delta" -> !seq-5!    history: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "6 neuron S0 delta" -> !seq-6!    history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "7 neuron S0 delta" -> !seq-7!    history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "8 neuron S0 delta" -> !seq-8!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "9 neuron S0 delta" -> !seq-9!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "10 neuron S0 delta" -> !seq-10!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        "11 neuron S0 delta" -> !seq-11!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
        "12 neuron S0 delta" -> !seq-12!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
        "13 neuron S0 delta" -> !seq-13!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        "14 neuron S0 delta" -> !seq-14!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

Module: print symbols module
    delay counter: 4

    inputs:
        !seq-0! -> "init store buffer flag"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-0! -> "use capitals flag"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-0! -> "print h"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-1! -> "print e"   history: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-2! -> "print l"   history: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-3! -> "print l"   history: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-4! -> "print o"   history: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-5! -> "print ,"   history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-5! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "print  "   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "use capitals flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "print h"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-8! -> "print e"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-9! -> "print l"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-10! -> "print l"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
        !seq-11! -> "print o"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
        !seq-12! -> "print !"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        !seq-12! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        !seq-13! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
        !seq-13! -> "flush buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

    outputs:

I fixed the test-for-existing-neuron code. The bug was that we were not detecting the auto delay synapses correctly.

Here is the new output of the testing Python script, noting in particular:

To store:
    name: N2
    delay: 5
    layer: 1    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']
    neurons: ['N0']
    buffer: Hello
That pattern already triggers a neuron.

which is exactly what we wanted. Here is the full output of that script:

$ python3 testing_system_print_sequence.py
Let's implement 'Hello World!' in a print sequence system:
To store:
    name: N0
    delay: 5
    layer: 1    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']
    neurons: []
    buffer: Hello
Will store a neuron and synapse!

Neuron: N0
    layer: 2
    activation count: 0
    pooling: <function pooling_or at 0x7f97fb5d0e18>
    params: {}
    patterns: 1
        0    trigger: <function trigger_dot_product_threshold at 0x7f97fb5d0d90>
        0    params: {'threshold': 5}
        0    [1, 1, 1, 1, 1]
        0    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']

    axon: []

Synapse: N0 S0
    source axon: N0
    source layer: 2
    type: <function synapse_delayed_identity at 0x7f97fb5d06a8>
    params: {'sign': 1, 'delay': 0}
    action: <function action_println at 0x7f97fb5d70d0>
    action params: {'s': 'stored sequence: Hello'}
    spike history: []

To store:
    name: N1
    delay: 0
    layer: 1    []
    neurons: []
    buffer:
Empty pattern!

To store:
    name: N2
    delay: 5
    layer: 1    ['print capital h S0 D4', 'print lower e S0 D3', 'print lower l S0 D1', 'print lower l S0 D2', 'print lower o S0 D0']
    neurons: ['N0']
    buffer: Hello
That pattern already triggers a neuron.

stored sequence: Hello
Hello, Hello!
Neural System: example sequence system
--------------------------------------

Sources:
    #OFF#: 0 <generator object source_off at 0x7f97fb5d2678>
    #ON#: 1 <generator object source_on at 0x7f97fb5d26d0>
    #INIT#: 0 <generator object source_init at 0x7f97fb5d2728>
    #ALT-1#: 1 <generator object source_alt_N at 0x7f97fb5d2780>

Channels: ['!seq-0!', '!seq-1!', '!seq-10!', '!seq-11!', '!seq-12!', '!seq-13!', '!seq-14!', '!seq-2!', '!seq-3!', '!seq-4!', '!seq-5!', '!seq-6!', '!seq-7!', '!seq-8!', '!seq-9!', '#ALT-1#', '#INIT#', '#OFF#']

Modules:
Module: sequence module
    delay counter: 20

    inputs:
        #INIT# -> "init flag"   history: [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        #ALT-1# -> "carry flag"   history: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
        #OFF# -> "off flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

    outputs:
        "0 neuron S0 delta" -> !seq-0!    history: [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "1 neuron S0 delta" -> !seq-1!    history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "2 neuron S0 delta" -> !seq-2!    history: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "3 neuron S0 delta" -> !seq-3!    history: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "4 neuron S0 delta" -> !seq-4!    history: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "5 neuron S0 delta" -> !seq-5!    history: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "6 neuron S0 delta" -> !seq-6!    history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "7 neuron S0 delta" -> !seq-7!    history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "8 neuron S0 delta" -> !seq-8!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "9 neuron S0 delta" -> !seq-9!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        "10 neuron S0 delta" -> !seq-10!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        "11 neuron S0 delta" -> !seq-11!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
        "12 neuron S0 delta" -> !seq-12!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
        "13 neuron S0 delta" -> !seq-13!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        "14 neuron S0 delta" -> !seq-14!    history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

Module: print symbols module
    delay counter: 4

    inputs:
        !seq-0! -> "init store buffer flag"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-0! -> "use capitals flag"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-0! -> "print h"   history: [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-1! -> "print e"   history: [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-2! -> "print l"   history: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-3! -> "print l"   history: [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-4! -> "print o"   history: [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-5! -> "print ,"   history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-5! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "print  "   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-6! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "use capitals flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-7! -> "print h"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-8! -> "print e"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-9! -> "print l"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
        !seq-10! -> "print l"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
        !seq-11! -> "print o"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
        !seq-12! -> "print !"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        !seq-12! -> "store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
        !seq-13! -> "init store buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
        !seq-13! -> "flush buffer flag"   history: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

    outputs:


@GarryM
I would suggest a couple of things to make this clearer.

  1. Draw a visual diagram of how your components connect given a particular code description.

  2. Draw another diagram for how the active parts work.

  3. Describe the syntax that you are using, so we can interpret what your code and outputs mean.

For instance, maybe you can pair a multi-layer code description with its visual diagram.

You should also go step-by-step through what each of the characters below means:

@jacobeverist Apologies for not being clear enough; I am quite terrible at explaining my ideas!

Perhaps this rough sketch will go some way towards explaining the pieces?
[image: synaptiflux-neuron-sketch-v2]

And for context, here is what we get when we print() out a sample neuron:

Neuron: first neuron
    layer: 0
    activation count: 0
    pooling: <function pooling_or at 0x7fd7e5fb0e18>
    params: {}
    patterns: 3
        0    trigger: <function trigger_dot_product_threshold at 0x7fd7e5fb0d08>
        0    params: {'threshold': 1}
        0    [0, 1, 2, 3]
        0    ['alpha S0', 'beta S0', 'gamma S0', 'delta S0']

        1    trigger: <function trigger_dot_product_threshold at 0x7fd7e5fb0d08>
        1    params: {'threshold': 1}
        1    [7, 7, 7]
        1    ['X S0', 'Y S0', 'Z S0']

        2    trigger: <function trigger_dot_product_threshold at 0x7fd7e5fb0d08>
        2    params: {'threshold': 1}
        2    [3, 5, 2]
        2    ['u S0', 'v S0', 'w S0']

    axon: [0, 1, 0, 0, 0, 1, 0]

In this particular case, our first neuron has 3 pattern detectors.

  • each pattern has its own stored pattern, e.g., [0, 1, 2, 3] for pattern 0.
  • each pattern has a list of synapses to read the corresponding values from, e.g., ['alpha S0', 'beta S0', 'gamma S0', 'delta S0'].
  • once the list of synapses is converted into a list of float values, the two lists are fed to the specified trigger function, with the given parameters for that pattern. In the above case, the params specify an activation threshold of 1.
  • the output of each trigger function, a single float per trigger function, is then fed to the specified pooling function. In the above case the pooling function is a logical or. The intention is that more than one distinct pattern can trigger the given neuron, or, if the pooling function is xor or sum mod 2, patterns can inhibit each other.
  • the output of the pooling function is then appended to the axon history; by convention that value is either 0 or 1.
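To make that flow concrete, here is a minimal sketch of a dot-product trigger and an or pooling function. The names and signatures are my own simplification for illustration; the project's actual trigger_dot_product_threshold and pooling_or may differ in detail.

```python
def dot_product_trigger(stored_pattern, synapse_values, threshold):
    """Fire (return 1) if the dot product of the stored pattern with the
    current synapse values reaches the given threshold, else 0."""
    dot = sum(p * v for p, v in zip(stored_pattern, synapse_values))
    return 1 if dot >= threshold else 0

def or_pooling(trigger_outputs):
    """Logical or pooling: the neuron fires if any of its patterns fired."""
    return 1 if any(trigger_outputs) else 0
```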

The last components in our sketch are the synapses; in general there are multiple synapses per axon. Here is an example synapse:

Synapse: alpha S0
    source axon: alpha
    source layer: -1
    type: <function synapse_identity at 0x7fbd7cc3f598>
    params: {'sign': 1}
    action: <function action_null at 0x7fbd7cc3fbf8>
    action params: {}
    spike history: [0, 5, 8, 3, 0, 1]

  • each synapse is wired to a particular input neural axon, here alpha.
  • the specified synapse function is then applied to the given axon history.
  • the output of the synapse function can optionally invoke an action, such as printing a given string, but potentially more interesting actions too, e.g., my auto-learning example above.
  • the output of the synapse is then fed into the inputs of neurons.
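A minimal sketch of the delayed-identity synapse type seen in the printouts above (again my own simplification; the real synapse_delayed_identity may differ):

```python
def delayed_identity(axon_history, sign=1, delay=0):
    """Read the axon value `delay` time steps back and pass it through,
    multiplied by the sign (use sign=-1 for an inhibitory synapse)."""
    if len(axon_history) <= delay:       # not enough history yet
        return 0
    return sign * axon_history[-1 - delay]
```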

The intention is for the trigger, pooling, synapse and action functions to be modular and general, giving this neural model some flexibility in what it can encode.

Now, let’s try to explain an if-then machine, as heavily used by SDB3.1 and the SDB4 GUI. If-then machines are intended to be an abstract model of a single neuron, and they strongly influenced the design of the SynaptiFlux project. Indeed, I’m currently working on an automatic mapping from if-then machines to the SynaptiFlux model, which should simplify the construction of SynaptiFlux neurons. It should be noted, though, that the SynaptiFlux model is more general, and its time component adds further power that if-then machines lack.

Anyway, the rules above are so-called learn rules in the SDB notation, also known as semantic triples, which take the general form:

operator |some ket> => some-superposition

But in the particular case of if-then machines we have some conventions:

  • node: 1 is the name of our neuron, but in theory it can be any string
  • the pattern operator specifies that this is a pattern to be matched against
  • |node: 1: 3> corresponds to the third pattern that can activate the given neuron
  • sp_k is shorthand for the k’th superposition, e.g., 2|alpha> + 3|beta> + 5|gamma>
  • the then operator specifies the output ket for when the given neuron activates. In the SDB language, it is also possible for the then rule to execute a specified action
  • |node: 1: *> means the then rule applies to all matching patterns with the neural name node: 1, as indicated by the : * suffix on the ket name

In the SDB language, neurons are matched against and activated by the following code snippet:

then drop-below[0.98] similar-input[pattern] some-input-pattern

where some-input-pattern is compared to each stored pattern sp_k using a similarity measure, and if the similarity is above 98%, the then rule for that neuron is invoked.
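For illustration, here is one simple similarity measure in the spirit of SDB's simm, together with a drop-below step. This is a sketch under my own assumptions (non-negative coefficients, aligned lists); the real simm handles superpositions and sequences more carefully.

```python
def simm(f, g):
    """Rescaled similarity of two non-negative coefficient lists:
    1.0 for identical patterns, 0.0 for disjoint ones."""
    if not f or not g:
        return 0.0
    return sum(min(a, b) for a, b in zip(f, g)) / max(sum(f), sum(g))

def drop_below(value, threshold):
    """Keep the similarity only if it reaches the threshold, else drop it to 0."""
    return value if value >= threshold else 0.0
```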

In a later post I will give a mapping from SDB superpositions and sequences to their SynaptiFlux equivalent.

Aside from the above GitHub links, I also have a corresponding SDB-related website with more info, though it has not seen much love in a while. Ditto an old documentation page.

Research Update

So, most recently I have been working on parsing simple SDB style code into the SynaptiFlux interface. The two main components are

  • parsing kets, superpositions and sequences into lists of coeffs and labels
  • parsing if-then machines into back-end SynaptiFlux.

The first of these was relatively easy. Just note that Sn corresponds to the n’th synapse, and Dn corresponds to a delay of value n (which is particularly important in sequences). Here is the output from our ket/sp/seq testing script, where each example shows the input ket/sp/seq and then the corresponding parsed coeff and synapse lists. Part of the point is that there is basically a one-to-one mapping back and forth between the two notation styles, which is good!

$ python3 testing_simple_sdb_parsing.py
Quick test of parsing simple SDB:
Pattern:
    input:   |>
    coeffs: []
    synapses: []

Pattern:
    input:  |alpha>
    coeffs: [1]
    synapses: ['alpha S0 D0']

Pattern:
    input:   2|apples>
    coeffs: [2.0]
    synapses: ['apples S0 D0']

Pattern:
    input:  3 |oranges>
    coeffs: [3.0]
    synapses: ['oranges S0 D0']

Pattern:
    input:   |alpha> + |beta> + |gamma>
    coeffs: [1, 1, 1]
    synapses: ['alpha S0 D0', 'beta S0 D0', 'gamma S0 D0']

Pattern:
    input:   2|a> + 3  |b> + 5|c> +  7 |d>
    coeffs: [2.0, 3.0, 5.0, 7.0]
    synapses: ['a S0 D0', 'b S0 D0', 'c S0 D0', 'd S0 D0']

Pattern:
    input:  |a> . |> . |> . |> . |b> . |> . |c>
    coeffs: [1, 1, 1]
    synapses: ['a S0 D6', 'b S0 D2', 'c S0 D0']

Pattern:
    input:  |H> . |e> . |l> . |l> . |o>
    coeffs: [1, 1, 1, 1, 1]
    synapses: ['H S0 D4', 'e S0 D3', 'l S0 D2', 'l S0 D1', 'o S0 D0']

Pattern:
    input:  2|a> . 3 |b> . 5 |c> . 7|d>
    coeffs: [2.0, 3.0, 5.0, 7.0]
    synapses: ['a S0 D3', 'b S0 D2', 'c S0 D1', 'd S0 D0']

Pattern:
    input:  2|alpha> + 3|beta> + 5|gamma> . |a> + |b> . 7|x> + 11 |y> + 13 |z>
    coeffs: [2.0, 3.0, 5.0, 1, 1, 7.0, 11.0, 13.0]
    synapses: ['alpha S0 D2', 'beta S0 D2', 'gamma S0 D2', 'a S0 D1', 'b S0 D1', 'x S0 D0', 'y S0 D0', 'z S0 D0']

Pattern:
    input:  2|alpha> + 3|beta> + 5|gamma> . |> . |> . |> . |a> + |b> . |> . |> . 7|x> + 11 |y> + 13 |z>
    coeffs: [2.0, 3.0, 5.0, 1, 1, 7.0, 11.0, 13.0]
    synapses: ['alpha S0 D7', 'beta S0 D7', 'gamma S0 D7', 'a S0 D3', 'b S0 D3', 'x S0 D0', 'y S0 D0', 'z S0 D0']
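The delay assignment seen in the sequence examples above can be sketched as follows (a hypothetical helper, not the actual parser): each frame in a sequence gets a delay equal to its distance from the final frame.

```python
def assign_delays(frames, synapse_number=0):
    """Given a parsed sequence of frames, where each frame is a list of
    (coeff, label) pairs, attach 'Sn Dm' suffixes with the delay m equal
    to the frame's distance from the end of the sequence."""
    coeffs, synapses = [], []
    n = len(frames)
    for i, frame in enumerate(frames):
        for coeff, label in frame:
            coeffs.append(coeff)
            synapses.append(f"{label} S{synapse_number} D{n - 1 - i}")
    return coeffs, synapses
```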

With that in place, the next step was to auto-map SDB if-then machines into SynaptiFlux. The point of if-then machines is to abstract away the details of a neuron model into a compact/terse representation. Given my background, I like compact/terse :slight_smile:. We start with a traditional-style machine; in this case we define a greetings machine:

    -- our greetings if-then machine:
    pattern |node: 1: 1> => |H> . |e> . |l> . |l> . |o>
    pattern |node: 1: 2> => |H> . |e> . |y>
    pattern |node: 1: 3> => |H> . |i>
    pattern |node: 1: 4> => |G> . |'> . |d> . |a> . |y>
    pattern |node: 1: 5> => |M> . |o> . |r> . |n> . |i> . |n> . |g>
    then |node: 1: *> => |Greetings!>

If any of these patterns matches a given input, say using the SDB code then drop-below[0.98] similar-input[pattern] input-seq, then we conclude we have seen a greeting. However, because SynaptiFlux is more general than this model of multiple patterns plus one output, we have updated our syntax to the slightly cleaner chunk notation. Note that almost all rules inside our chunks are optional, so you only define the ones that you need and can leave out the rest. We include all the possible rule types here for demonstration purposes.

First we define some default functions and parameters. If a given neuron or synapse chunk does not define one of these, its value falls back to the default:

    -- define our default functions and parameters using chunk notation:
    as default:
        layer => |1>
        trigger_fn => |trigger: simm> + |threshold: 0.98>
        pooling_fn => |pooling: or>
        synapse_fn => |synapse: delayed_identity> + |sign: 1> + |delay: 2>
        action_fn => |action: println> + |s: some unspecified string>
    end:

Behind the scenes, we have a function that parses superpositions into a Python dictionary:

def parse_sp_to_dict(s, cast_values=False):
    """Parse a superposition containing |key: value> pairs into a python dictionary."""
    sp_dict = {}
    for label in parse_sp(s, synapse_number=None)[-1]:
        try:
            key, value = label.split(': ', 1)
        except ValueError:  # label is not a |key: value> pair
            continue
        if cast_values:
            value = cast_value(value)
        sp_dict[key] = value
    return sp_dict
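To see what this does in practice, here is a self-contained stand-in (parse_sp itself is not shown above, so a minimal regex-based label extractor is used in its place; names here are illustrative, not the project's):

```python
import re

def extract_labels(s):
    """Minimal stand-in for parse_sp(...): pull the ket labels out of a superposition."""
    return re.findall(r'\|([^>]*)>', s)

def sp_to_dict(s):
    """Parse |key: value> kets into a dictionary, as parse_sp_to_dict above does."""
    sp_dict = {}
    for label in extract_labels(s):
        if ': ' in label:
            key, value = label.split(': ', 1)
            sp_dict[key] = value
    return sp_dict
```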

And for each of the trigger, pooling, synapse and action functions, we have corresponding maps from function names to the functions themselves. E.g., the trigger function map:

trigger_fn_map = {
    'dot_product': trigger_dot_product_threshold,
    'simm': trigger_list_simm_threshold
}

Next, we define a neuron. Something to note here is that our two patterns have been given different trigger functions/parameters, which we can see in the neural module output below if we look closely enough. In this case, the trigger functions are the same, just with different trigger thresholds. Note that a trigger rule is required to precede the pattern rule it applies to:

    -- define our if-then machine:
    as neuron |neuron 1>:
        layer => |3>
        pooling_fn => |pooling: sum_mod2>
        trigger_fn => |trigger: simm> + |threshold: 0.75>
        pattern => 2|alpha> + 3|beta>
        trigger_fn => |trigger: simm> + |threshold: 0.5>
        pattern => 5|x> + 7|y> + 11|z>
    end:

And finally a synapse. The axon rule is mandatory, but synapse_fn and action_fn can be left out, in which case they are filled in with the default values:

    -- define a synapse:
    as synapse |neuron 1 S0 not>:
        axon => |neuron 1>
        synapse_fn => |synapse: delayed_not> + |sign: 1> + |delay: 0>
        action_fn => |action: println> + |s: synapse activated by not of neuron 1>
    end:

After parsing, we have generated the following neural module:

Neural Module: SynaptiFlux if then machine
------------------------------------------

Delay counter: 0

Default functions and parameters:
    layer: 1

    trigger: <function trigger_list_simm_threshold at 0x7f76f71b7268>
    params: {'threshold': 0.98}

    pooling: <function pooling_or at 0x7f76f71b72f0>
    params: {}

    synapse: <function synapse_delayed_identity at 0x7f76f71b1a60>
    params: {'sign': 1, 'delay': 2}

    action: <function action_println at 0x7f76f71b7510>
    params: {'s': 'some unspecified string'}

Sources:

Neurons:
    poked neurons: set()

Neuron: neuron 1
    layer: 3
    activation count: 0
    pooling: <function pooling_sum_mod2 at 0x7f76f71b7488>
    params: {}
    patterns: 2
        0    trigger: <function trigger_list_simm_threshold at 0x7f76f71b7268>
        0    params: {'threshold': 0.75}
        0    [2.0, 3.0]
        0    ['alpha S0 D0', 'beta S0 D0']

        1    trigger: <function trigger_list_simm_threshold at 0x7f76f71b7268>
        1    params: {'threshold': 0.5}
        1    [5.0, 7.0, 11.0]
        1    ['x S0 D0', 'y S0 D0', 'z S0 D0']

    axon: []

Synapses:
Synapse: neuron 1 S0 not
    source axon: neuron 1
    source layer: 3
    type: <function synapse_delayed_not at 0x7f76f71b1ae8>
    params: {'sign': 1, 'delay': 0}
    action: <function action_println at 0x7f76f71b7510>
    action params: {'s': 'synapse activated by not of neuron 1'}
    spike history: []


Neurons:
    layer: 3    ['neuron 1']

Synapses:
    layer: 3    ['neuron 1 S0 not']

Hopefully the above will simplify defining neural modules going forward.

ToDo

Maybe implement the inverse, ie, code to map a neural module back to our new chunk notation? The two representations are meant to be one-to-one, so it should be doable.


Update

I now have the inverse working. So given this initial if-then machine:

    -- define our default functions and parameters using chunk notation:
    as default:
        layer => |1>
        trigger_fn => |trigger: simm> + |threshold: 0.98>
        pooling_fn => |pooling: or>
        synapse_fn => |synapse: delayed_identity> + |sign: 1> + |delay: 2>
        action_fn => |action: println> + |s: some unspecified string>
    end:


    -- define our if-then machine:
    as neuron |neuron 1>:
        layer => |3>
        pooling_fn => |pooling: sum_mod2>
        trigger_fn => |trigger: simm> + |threshold: 0.75>
        pattern => 2|alpha> + 3|beta>
        trigger_fn => |trigger: simm> + |threshold: 0.5>
        pattern => 5|x> + 7|y> + 11|z>
    end:

    -- define a synapse:
    as synapse |neuron 1 S0 not>:
        axon => |neuron 1>
        synapse_fn => |synapse: delayed_not> + |sign: 1> + |delay: 0>
        action_fn => |action: println> + |s: synapse activated by not of neuron 1>
    end:

Maps to a neural module and then NM.as_chunk() outputs:

as default:
    layer => |1>
    trigger_fn => |trigger: simm> + |threshold: 0.98>
    pooling_fn => |pooling: or>
    synapse_fn => |synapse: delayed_identity> + |sign: 1> + |delay: 2>
    action_fn => |action: println> + |s: some unspecified string>
end:

as neuron |neuron 1>:
    layer => |3>
    pooling => |pooling: sum_mod2>
    trigger_fn => |trigger: simm> + |threshold: 0.75>
    pattern => 2.0|alpha S0 D0> + 3.0|beta S0 D0>
    trigger_fn => |trigger: simm> + |threshold: 0.5>
    pattern => 5.0|x S0 D0> + 7.0|y S0 D0> + 11.0|z S0 D0>
end:

as synapse |neuron 1 S0 not>:
    axon => |neuron 1>
    synapse_fn => |synapse: delayed_not> + |sign: 1> + |delay: 0>
    action_fn => |action: println> + |s: synapse activated by not of neuron 1>
end:

Update 2

Today I made some small improvements to my as_chunk() methods. Now the neurons are output sorted by their layer number, and synapses are grouped with their corresponding neurons, instead of dumping all the neurons and then dumping all the synapses.

ToDo

@jacobeverist Despite my obsession with kets, superpositions and sequences, and now chunk notation, I think if I want anyone at all to be interested in this work, I am going to have to implement code to map to and from JSON notation. Any thoughts in that regard?

Follow up

I now have a bunch of methods to map back and forth between three different representations of my neural modules:

  • chunks, as defined above
  • python dictionaries
  • JSON format (a simple wrapper around the python dictionaries methods)

Further to this, I wrote a script that can convert my machines between chunk and JSON file formats.

It should be noted that hand-typing JSON files is basically impractical; they are simply too verbose. So if anyone wants to manually write a collection of machines, they will have to learn the chunk notation, which is about as terse as I can make it. The other option is to define a neural module in python, and then dump that out to your desired file format.
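
For what it’s worth, once a module is available as a python dictionary, the JSON wrapper really is thin. The dictionary layout below is purely hypothetical (the real SynaptiFlux structure differs), but it shows the round-trip mechanics:

```python
import json

# hypothetical dictionary form of a tiny neural module,
# just to demonstrate the dict <-> JSON round trip:
module = {
    'name': 'demo module',
    'defaults': {'layer': 1, 'trigger': {'fn': 'simm', 'threshold': 0.98}},
    'neurons': [
        {'name': 'neuron 1', 'layer': 3,
         'patterns': [{'coeffs': [2.0, 3.0],
                       'synapses': ['alpha S0 D0', 'beta S0 D0']}]},
    ],
}

text = json.dumps(module, indent=4)   # dump to (verbose!) JSON text
restored = json.loads(text)           # and load it straight back
assert restored == module             # the round trip is lossless
```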

ToDo

I’m not super happy with the current organization of my code. It seems messy to me to stuff so much into the neural module class! Instead, the code that maps to and from the various representations should live in another python file, with only brief wrappers in the NM class.

Now with this all in place, I have a toy greeting Fred example in mind that I want to implement, and experiment with. More on this later, when I get some more free time.

Yet another update!

Heh, this is turning into a dev blog for me. Today I implemented another representation for compactly defining neural modules, this time inspired by the SDB associative memory system. Note that this new .map notation is not as general as my other representations (hence allowing it to be more compact), but for most cases it should be sufficient. Besides, you can load multiple different files and representations into a single module if needed anyway.

Here is an example of that older associative memory notation, for some sample knowledge about Sam:

spelling |Sam> => |S> . |a> . |m>
mother |Sam> => |Liz>
father |Sam> => |Tom>
friends |Sam> => |Fred> + |Emma> + |Max>

These learn rules, or associative triples, take the general form

operator |ket> => sdb-sequence

Our new .map representation instead has the general form

sdb-sequence => clean-superposition

In which case our above knowledge about Sam becomes (approximately)

|S> . |a> . |m> => |Sam>
|op: mother> + |Sam> => |Liz>
|op: father> + |Sam> => |Tom>
|op: friends> + |Sam> => |Fred> + |Emma> + |Max>

There are a few differences though, and the mapping is definitely not one-to-one. In the original SDB notation, applying an operator to a ket returned the right hand side of the corresponding learn rule, which allows the result to be an arbitrary sequence. In the new .map notation, the left hand side of a rule is a pattern to match, and the right hand side is a list of neurons to trigger. So for example, to trigger Sam’s mother we poke the |op: mother> and |Sam> neurons at the same time, resulting in the |Liz> neuron being activated. Likewise we can invoke Sam’s father, or friends, in a similar manner. Further, if we poke |S>, then one time step later |a>, and one more time step later |m>, the |Sam> neuron is invoked.

Here is the testing code for that

import synaptiflux as sf

if __name__ == '__main__':
    print('Testing the neural module from_map() method:')

    # load a sample file:
    filename = 'machines/Sam.map'
    NM = sf.NeuralModule('testing from_map method')
    NM.load_from_map(filename, verbose=True)

    # test poking:
    print()
    NM.poke_neurons(['Sam', 'op: mother'])
    NM.update_system(4)
    print()
    NM.poke_neurons(['Sam', 'op: friends'])
    NM.update_system(4)
    print()
    NM.poke_neuron('S')
    NM.update_system(1)
    NM.poke_neuron('a')
    NM.update_system(1)
    NM.poke_neuron('m')
    NM.update_system(3)
    print()

    # see what we have:
    print(NM)

The relevant output is given below, where the number at the start of each line indicates the current time step of the system. So for example, we see Sam and the mother operator are invoked at time step 0, which then triggers the Liz neuron and synapse at time step 1. Similarly for Sam’s friends at time steps 4 and 5. Finally, we see that if we poke S, a, m in sequence, Sam is triggered at the 11th time step.

0) Sam
0) op: mother
1) Liz

4) Sam
4) op: friends
5) Fred
5) Emma
5) Max

8) S
9) a
10) m
11) Sam

Here is the neural module from_map() method in full

    def from_map(self, s, verbose=False):
        """Load the map string s into the neural module."""
        # set some defaults:
        layer = 0
        synapse_number = 0
        for line in s.splitlines():
            line = line.strip()
            if len(line) == 0 or line.startswith('--'):
                continue
            # parse them:
            try:
                pattern, neurons = line.split(' => ', 1)
                coeffs, synapse_labels = parse_seq(pattern, synapse_number=synapse_number)
                neuron_names = parse_sp(neurons)[1]
                clean_synapse_labels = [strip_delay(s) for s in synapse_labels]
            except Exception as e:
                print(e)
                continue
            if verbose:
                print(f'\npattern: {pattern}')
                print(f'neurons: {neurons}')
                print(f'coeffs: {coeffs}')
                print(f'synapse_labels: {synapse_labels}')
                print(f'clean_synapse_labels: {clean_synapse_labels}')
                print(f'neuron_names: {neuron_names}')
            # now build the neurons:
            for neuron_name in neuron_names:
                if not self.do_you_know_neuron(neuron_name):
                    if verbose:
                        print(f'Unknown neuron: "{neuron_name}", adding it')
                    # self.add_neuron(self, name, layer, seed_pattern, synapse_labels, trigger_fn, trigger_params, pooling_fn, pooling_params)
                    self.add_neuron(neuron_name, layer, coeffs, synapse_labels, trigger_list_simm_threshold, {'threshold': 0.98}, pooling_or, {})
                else:
                    if verbose:
                        print(f'Known neuron: "{neuron_name}", appending to it')
                    # append_neuron_pattern(self, name, seed_pattern, synapse_labels, trigger_fn, trigger_params)
                    self.append_neuron_pattern(neuron_name, coeffs, synapse_labels, trigger_list_simm_threshold, {'threshold': 0.98}) # test this code section
                synapse_name = f'{neuron_name} S0' # hardwire in synapse number here for now.
                if not self.do_you_know_synapse(synapse_name):
                    # self.add_synapse(self, name, axon_name, synapse_fn_type, params, synapse_action_type, action_params)
                    # self.add_synapse(synapse_name, neuron_name, synapse_identity, {'sign': 1}, action_println, {'s': neuron_name})
                    self.add_synapse(synapse_name, neuron_name, synapse_identity, {'sign': 1}, action_time_step_println, {'s': neuron_name, 'NM': self}) # does this work?? Yup!
                    self.patch_in_new_synapses() # is this the best place for this?
            # now build the unknown synapses:
            for synapse_name in clean_synapse_labels:
                if not self.do_you_know_synapse(synapse_name):
                    if verbose:
                        print(f'Unknown synapse: "{synapse_name}"')
                    neuron_name = strip_synapse(synapse_name)
                    self.add_neuron(neuron_name, layer, [1], ['#OFF#'], trigger_list_simm_threshold, {'threshold': 0.98}, pooling_or, {})
                    # self.add_synapse(synapse_name, neuron_name, synapse_identity, {'sign': 1}, action_println, {'s': neuron_name})
                    self.add_synapse(synapse_name, neuron_name, synapse_identity, {'sign': 1}, action_time_step_println, {'s': neuron_name, 'NM': self}) # does this work?? Yup!
                    self.patch_in_new_synapses() # is this the best place for this?

TODO

The above .map notation maps synapse patterns to neurons. Surely we also need the sibling of this, where neurons map to synapses? Then perhaps we could do

|op: spelling> + |Sam> => |spelling of Sam>
|spelling of Sam> |=> |S> . |a> . |m>

where the first line invokes the spelling of Sam neuron, and the second line invokes S, a, m in sequence. Presumably the output would look something like this:

0) Sam
0) op: spelling
1) spelling of Sam
2) S
3) a
4) m
5) Sam

where the last line is triggered by this line from earlier:

|S> . |a> . |m> => |Sam>

Greeting Fred example

After implementing NM.poke_neuron_sequence() this morning, and NM.from_map() yesterday, I now finally have enough pieces in place to implement a Hello World! example, or in this case, a Greeting Fred example. In this project, poking a neuron means activating it for a single time step, so that it fires independently of any other inputs to that neuron. Poking a sequence means poking neurons one after another, one time step at a time. In this particular example, this is invoked using a sequence of characters:

NM.poke_neuron_sequence(list('Hey Fred. Afternoon Robo'))
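
The scheduling idea can be sketched with a hypothetical helper (this is not the actual poke_neuron_sequence implementation): each element of the sequence is simply assigned its own successive time step:

```python
def schedule_pokes(sequence, start_step=0):
    """Illustrative sketch: map each element of a poked sequence to the
    time step at which it fires, one element per step."""
    return {start_step + i: name for i, name in enumerate(sequence)}

print(schedule_pokes(list('Hey')))
# → {0: 'H', 1: 'e', 2: 'y'}
```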

Map files currently consist of a single rule type, of the form:

synapse-pattern => neuron(s)

Ie, if the given synapse-pattern is detected, then invoke the given neuron(s), combined with a synapse action of printing the current time step + the neuron name. The inclusion of the time step makes the time evolution of the system clearer.

Here is our greet-Fred.map example:

-- a Greet Fred map example:

-- the Greetings neuron:
|H> . |e> . |l> . |l> . |o> => |Greetings!>
|H> . |i> => |Greetings!>
|H> . |e> . |y> => |Greetings!>
|Day greeting> => |Greetings!>

-- the day greetings neuron:
|M> . |o> . |r> . |n> . |i> . |n> . |g> => |Day greeting>
|N> . |o> . |o> . |n> => |Day greeting>
|A> . |f> . |t> . |e> . |r> . |n> . |o> . |o> . |n> => |Day greeting>
|E> . |v> . |e> . |n> . |i> . |n> . |g> => |Day greeting>


-- the Fred Smith neuron:
|F> . |r> . |e> . |d> => |person: Fred Smith>
|F> . |r> . |e> . |d> . |d> . |i> . |e> => |person: Fred Smith>
|F> . |r> . |e> . |d> . | > . |S> . |m> . |i> . |t> . |h> => |person: Fred Smith>

-- the Fred Roberts neuron:
|F> . |r> . |e> . |d> => |person: Fred Roberts>
|R> . |o> . |b> . |o> => |person: Fred Roberts>
|F> . |r> . |e> . |d> . | > . |R> . |o> . |b> . |e> . |r> . |t> . |s> => |person: Fred Roberts>

Something interesting to note here is our pooling feature. For each of the defined neurons, we have multiple patterns that can invoke that neuron. Eg, the |Greetings!> neuron can be triggered by any of the patterns for Hello, Hi, Hey or the |Day greeting> neuron. Similarly the rest of our neurons can be triggered by multiple independent input patterns.

The poked input sequence Hey Fred. Afternoon Robo produces the following output, noting that the numbers are the time step at which that neuron was activated:

0) H
1) e
2) y
3) Greetings!
3)
4) F
5) r
6) e
7) d
8) person: Fred Smith
8) person: Fred Roberts
9)
10) A
11) f
12) t
13) e
14) r
15) n
16) o
17) o
18) n
19) Day greeting
19)
20) Greetings!
20) R
21) o
22) b
23) o
24) person: Fred Roberts

So we see

  • Hey triggers our Greetings! neuron
  • Fred triggers both Fred Smith and Fred Roberts
  • Afternoon triggers Day greeting
  • Day greeting triggers Greetings!
  • Robo triggers Fred Roberts

The inclusion of a time component in SynaptiFlux is a big differentiator from the previous SDB work. Note however that the above output could also be represented using SDB sequence notation, but using time-step + neuron-name is easier to read:

|H> . |e> . |y> . |Greetings!> + | > . |F> . |r> . |e> . |d> . |person: Fred Smith> + |person: Fred Roberts> . | > . |A> . |f> . |t> . etc

The associated example code is quite compact:

    import synaptiflux as sf

    # load a sample file:
    filename = 'machines/greet-Fred.map'
    NM = sf.NeuralModule('testing from_map method using Greet Fred machine')
    NM.load_from_map(filename, verbose=False)

    # test poking:
    print()
    NM.poke_neuron_sequence(list('Hey Fred. Afternoon Robo'))
    NM.update_system(30)
    print()

    # see what we have:
    # print(NM)

Improved our .from_map() method

OK. Our map notation now has two types of learn rules (hopefully we won’t need to add any further rule types):

synapses => neuron(s)
neuron(s) |=> synapses

For example, here is the spelling of Sam map file:

|spelling of Sam> |=> |S> . |a> . |m>
|op: spelling> + |Sam> => |spelling of Sam>
|S> . |a> . |m> => |Sam>

This was slightly complicated by the fact that our synapse names need to be unique. It would be infeasible to update all the neuron input patterns to take into account all the possible input synapses, so my solution was to implement an alias dictionary. Here it is for the spelling of Sam example:

Synapses:
    aliases:
        "S S0" <- {'S S0', 'spelling of Sam S0'}
        "a S0" <- {'spelling of Sam S1', 'a S0'}
        "m S0" <- {'m S0', 'spelling of Sam S2'}
        "spelling of Sam S0" <- {'spelling of Sam S0'}
        "spelling of Sam S1" <- {'spelling of Sam S1'}
        "spelling of Sam S2" <- {'spelling of Sam S2'}
        "op: spelling S0" <- {'op: spelling S0'}
        "Sam S0" <- {'Sam S0'}

In the back end we had to make a small change in our Neuron class, and in the update_axon method in particular. We replaced:

                    if label in synapses:
                        value = synapses[label].read_synapse(delay)

with

                    if label in synapse_alias_dict:
                        value = max(synapses[sublabel].read_synapse(delay) for sublabel in synapse_alias_dict[label])
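
To make the alias mechanics concrete, here is a toy standalone version of that read, using names from the spelling of Sam example (the Synapse objects are replaced by plain float values for brevity):

```python
# toy current values on each physical synapse:
synapse_values = {
    'S S0': 0.0,                 # the plain S neuron has not fired
    'spelling of Sam S0': 1.0,   # but its alias fired via the |=> rule
}

# the alias dictionary groups synapses that should be read as one:
synapse_alias_dict = {
    'S S0': {'S S0', 'spelling of Sam S0'},
}

# reading through the alias takes the max over all grouped synapses:
label = 'S S0'
value = max(synapse_values[sublabel] for sublabel in synapse_alias_dict[label])
assert value == 1.0  # the neuron sees an active input either way
```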

So now if we poke (ie, activate) the op: spelling and the Sam neuron using

    import synaptiflux as sf

    filename0 = 'machines/spelling-Sam.map'
    NM0 = sf.NeuralModule('testing neuron synapse map learn rules')
    NM0.load_from_map(filename0, verbose=True)

    # see what we have:
    # print(NM0)

    print()
    NM0.poke_neurons(['Sam', 'op: spelling'])
    NM0.update_system(10)

we get the following output:

0)    op: spelling
0)    Sam
1)    S
2)    a
3)    m
4)    Sam

where

  • the number on the left is the current time step of the neural module
  • op: spelling + Sam invokes the sequence |S> . |a> . |m>
  • the sequence |S> . |a> . |m> invokes the neuron Sam

Our synapses can have coefficients too (ie, SDB style), which we demonstrate in the alpha-beta example:

-- testing exact simm match:
|alpha beta> |=> 2|alpha> . 3|beta>

-- testing min simm match:
-- |alpha beta> |=> 5|alpha> . 7|beta>

-- show that the pattern is matched:
2|alpha> . 3|beta> => |alpha beta detected>

with the following output after poking the alpha beta neuron:

0)    2.0, alpha
1)    3.0, beta
2)    alpha beta detected

Where exact simm means the pattern and coefficients should match exactly (using our similarity measure), and min simm means the coefficients need to be at least those required for a match. The corresponding code is in the trigger_fn code file.
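
For reference, a common form of the similarity measure is the sum of element-wise minimums divided by the larger of the two coefficient sums. The sketch below is illustrative only, and assumes non-negative coefficients; the actual functions in the trigger_fn code file may differ in detail:

```python
def list_simm(f, g):
    """Sketch of a similarity measure on two coefficient lists:
    1 for an exact match, 0 for no overlap, in between otherwise."""
    if len(f) != len(g):
        return 0
    wf, wg = sum(f), sum(g)
    if wf == 0 or wg == 0:
        return 0
    return sum(min(a, b) for a, b in zip(f, g)) / max(wf, wg)

print(list_simm([2, 3], [2, 3]))  # exact match of 2|alpha> . 3|beta> → 1.0
print(list_simm([2, 3], [5, 7]))  # partial match against 5|alpha> . 7|beta>
```

With a trigger threshold of, say, 0.98, only the exact match fires; a min simm variant would instead fire whenever the input coefficients are at least as large as those in the pattern.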

Today’s update: run_map.py

So, I implemented a wrapper around the NM.from_map() method, called run_map.py. This script adds a couple of new features to .map files, and allows us to run them. The newly available commands are:

  • --: a comment line
  • print: print the given string
  • poke: poke a neuron sequence list, using SDB sequence notation
  • poke-list: poke a neuron sequence list, using Python list notation
  • update: advance the neural module the given number of time steps
  • exit: exit from the current map file

If none of these match, then the line of text is sent to NM.from_map(line).
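
The dispatch can be sketched roughly as below. The command spellings here (eg whether exit takes a trailing colon) are assumptions for illustration, not the exact run_map.py syntax:

```python
def run_map_line(line, NM=None):
    """Illustrative dispatch for a run_map.py style script: recognise the
    special commands, and fall through to NM.from_map() for learn rules."""
    line = line.strip()
    if not line or line.startswith('--'):
        return 'comment'
    if line.startswith('print:'):
        print(line[len('print:'):].strip())
        return 'print'
    if line.startswith('poke:') or line.startswith('poke-list:'):
        return 'poke'              # would call NM.poke_neuron_sequence(...)
    if line.startswith('update:'):
        steps = int(line[len('update:'):].strip())
        return ('update', steps)   # would call NM.update_system(steps)
    if line == 'exit:':
        return 'exit'
    return 'rule'                  # everything else goes to NM.from_map(line)
```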

Of particular interest to me was finding a correspondence between my previous SDB associative memory and recall notation and our new SF notation.

Hence the following example

$ python3 run_map.py machines/Sam-family-and-friends.map

With full output:

$ python3 run_map.py machines/Sam-family-and-friends.map
Loading machines/Sam-family-and-friends.map

Testing the SDB <-> SF correspondence:
--------------------------------------

In this map file we try to make clear the mapping between SDB learn rules and our new SF map notation ...

SDB learn rules:
    mother |Sam> => |Liz>
    friends |Sam> => |Emma> + |Jane> + |Fred> + |Robert>
    father |Liz> => |Tom>
    age |Sam> => |22>
    age |Liz> => |57>
    age |Tom> => |87>
    age |Emma> => |25>
    age |Jane> => |23>
    age |Fred> => |21>
    age |Robert> => |27>
    friends |Emma> => |Julia> + |Jane> + |Mike>
    friends |Robert> => |Harry> + |George>

Testing the SDB <-> SF correspondence:
--------------------------------------

Sam's mother:
SDB: mother |Sam>
poke_list: [['op: mother', 'Sam']]
update: 4
0)    op: mother
0)    Sam
1)    Liz

Sam's friends:
SDB: friends |Sam>
poke_list: [['op: friends', 'Sam']]
update: 4
4)    Sam
4)    op: friends
5)    Emma
5)    Jane
5)    Fred
5)    Robert

Sam's mother's age:
SDB: age mother |Sam>
poke_list: [['op: mother', 'Sam'], ['op: age']]
update: 5
8)    op: mother
8)    Sam
9)    Liz
9)    op: age
10)    57

Sam's mother's father's age:
SDB: age father mother |Sam>
poke_list: [['op: mother', 'Sam'], ['op: father'], ['op: age']]
update: 6
13)    op: mother
13)    Sam
14)    Liz
14)    op: father
15)    Tom
15)    op: age
16)    87

Sam's friend's friend's:
SDB: friends friends |Sam>
poke_list: [['op: friends', 'Sam'], ['op: friends']]
update: 5
19)    Sam
19)    op: friends
20)    Emma
20)    Jane
20)    Fred
20)    Robert
20)    op: friends
21)    Jane
21)    Julia
21)    Mike
21)    Harry
21)    George

Sam's friend's age's:
SDB: age friends |Sam>
poke_list: [['op: friends', 'Sam'], ['op: age']]
update: 5
24)    Sam
24)    op: friends
25)    Emma
25)    Jane
25)    Fred
25)    Robert
25)    op: age
26)    25
26)    23
26)    21
26)    27

For comparison, here is that knowledge in .map, using synapses => neuron(s) learn rules:

|op: mother> + |Sam> => |Liz>
|op: friends> + |Sam> => |Emma> + |Jane> + |Fred> + |Robert>
|op: father> + |Liz> => |Tom>
|op: age> + |Sam> => |22>
|op: age> + |Liz> => |57>
|op: age> + |Tom> => |87>
|op: age> + |Emma> => |25>
|op: age> + |Jane> => |23>
|op: age> + |Fred> => |21>
|op: age> + |Robert> => |27>
|op: friends> + |Emma> => |Julia> + |Jane> + |Mike>
|op: friends> + |Robert> => |Harry> + |George>

Let’s briefly examine one of the above examples a little more deeply:

Sam's mother's father's age:
SDB: age father mother |Sam>
poke_list: [['op: mother', 'Sam'], ['op: father'], ['op: age']]
update: 6
13)    op: mother
13)    Sam
14)    Liz
14)    op: father
15)    Tom
15)    op: age
16)    87

The question we are trying to ask is, given the knowledge entered into our system, what is the age of Sam's mother's father? In SDB that is answered with the query:

age father mother |Sam>

In our map files, this question is asked using the following command:

poke: |op: mother> + |Sam> . |op: father> . |op: age>

which is parsed into the following poke_list:

[['op: mother', 'Sam'], ['op: father'], ['op: age']]

If we then update the neural module by 6 time steps, we get the following answer:

13)    op: mother
13)    Sam
14)    Liz
14)    op: father
15)    Tom
15)    op: age
16)    87

Recall that the number on the left is the current time-step of the module, and the string on the right is the currently activated neurons/synapses. So we see:

  • op: mother + Sam at the same time step activates Liz
  • Liz + op: father at the same time step activates Tom
  • Tom + op: age at the same time step activates 87

So with this SDB <-> SF mapping in place, our toy SynaptiFlux is slowly getting more interesting.

ToDo

The main focus of the SDB work was to implement a large variety of operators that act on superpositions and sequences. I have just started planning how to add operators to SF. I will probably start with these common operators:

  • operator_clean sets all coeffs in a pattern to 1
  • operator_drop_below removes all elements from a pattern below some threshold
  • operator_normalize normalizes the sum of the coefficients in a pattern to be 1
  • operator_add adds two patterns together, SDB superposition style
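
These four are simple to sketch if we (hypothetically) represent a pattern as a dictionary mapping ket labels to coefficients; the real SF patterns store coefficient lists and synapse labels separately, so the actual implementations will differ:

```python
def operator_clean(pattern):
    """Set all coefficients in a pattern to 1."""
    return {k: 1 for k in pattern}

def operator_drop_below(pattern, threshold):
    """Remove all elements with coefficient below the threshold."""
    return {k: v for k, v in pattern.items() if v >= threshold}

def operator_normalize(pattern):
    """Rescale so the coefficients sum to 1 (empty/zero patterns unchanged)."""
    total = sum(pattern.values())
    if total == 0:
        return dict(pattern)
    return {k: v / total for k, v in pattern.items()}

def operator_add(p1, p2):
    """Add two patterns, SDB superposition style: shared kets add coefficients."""
    result = dict(p1)
    for k, v in p2.items():
        result[k] = result.get(k, 0) + v
    return result

p = {'alpha': 2.0, 'beta': 3.0}
print(operator_clean(p))                        # → {'alpha': 1, 'beta': 1}
print(operator_drop_below(p, 2.5))              # → {'beta': 3.0}
print(operator_normalize(p))                    # → {'alpha': 0.4, 'beta': 0.6}
print(operator_add(p, {'beta': 1.0, 'x': 5.0})) # → {'alpha': 2.0, 'beta': 4.0, 'x': 5.0}
```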

Then probably add the following method to our Neuron class:

Neuron.apply_operator(pattern_number, operator, **params)

More details later! (when I know what they are …)