
Experimental functions

conv

1D or 2D convolution of the input by a filter. See the Convolution arithmetic guide for a nice explanation.

(conv in filter {pad {stride {dilation}}})

parameter

examples

Sobel Horizontal

Sobel operator applied on the 1st component (red) of an image.
Shows the original tensor, the filter kernel and the result.

(def
  ;get 1st (red) component of image
  gray (get (import "file://~/http_server/images/tix_color.png") 0)
  sobel-x (tensor (shape 3 3) [
    -1 0 1
    -2 0 2
    -1 0 1])
  sobel-y (transpose sobel-x)
)
[
  gray
  sobel-x
  (conv gray sobel-x)
]

Sobel gradient magnitude

Applies the horizontal and vertical Sobel operators on the 1st component (red) of an image.
The results are combined to find the absolute magnitude of the gradient at each point.

(def
  ;get 1st (red) component of image
  gray (get (import "file://~/http_server/images/tix_color.png") 0)
  sobel-x (tensor (shape 3 3) [-1 0 1 -2 0 2 -1 0 1])
  sobel-y (transpose sobel-x)
)
[
  gray
  (sqrt
    (+
      (sqr (conv gray sobel-x))
      (sqr (conv gray sobel-y))))
]

Sobel RGB combined magnitudes

Computes the Sobel magnitude on each RGB channel.
Then combines them into a 3D tensor (RGB x X x Y) to obtain a color image.

(def
  tix (import "file://~/http_server/images/tix_color.png")
  sobel-x (tensor (shape 3 3) [-1 0 1 -2 0 2 -1 0 1])
  sobel-y (transpose sobel-x)
)
(defn one-channel[img filter c]
  ;apply filter on given channel
  (conv (get img c) filter)
)
(defn sobel-gradient[img c]
  ;compute sobel gradient on given channel
  (sqrt
    (+
      (sqr (one-channel img sobel-x c))
      (sqr (one-channel img sobel-y c))))  
)
[
  tix
  ;combine gradient per channel
  ;this creates an RGB image
  (tensor 
    (sobel-gradient tix 0)
    (sobel-gradient tix 1)
    (sobel-gradient tix 2))
]
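
Strided convolution

A minimal sketch of the optional arguments (the pad and stride values here are illustrative): zero padding and a stride of 2, so each output dimension is roughly halved.

(def
  ;get 1st (red) component of image
  gray (get (import "file://~/http_server/images/tix_color.png") 0)
  sobel-x (tensor (shape 3 3) [-1 0 1 -2 0 2 -1 0 1])
)
[
  gray
  (conv gray sobel-x 0 2)
]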

covariance

Covariance matrix (a tensor of dimension 2) between timeseries. Input timeseries are synchronized; missing points are treated as piecewise constant.

(covariance ts...)

parameter

example

Shows the covariance matrix between temperatures coming from 10 meteo stations. Longitude must be West (< 0) and timeserie length must be > 1000. Slice is used to compute the covariance only for the first 10 stations. A lighter synthetic sketch follows this full example.

WARNING: can be slow to compute (>10 s), about 30 million points are explored.

(def start 2016-01-01 stop 2018-12-31)
(defn is-usable[code]
  (cond
    (< (vget (timeserie @"lon" "meteonet" code start stop)) 0) false
    (< (len (timeserie @"t" "meteonet" code start stop)) 1000) false
    true))
(def stations
  (value-as-array (slice (keep (perimeter "meteonet" start) is-usable) [0 9])))
(defn ts[code]
  (timeserie @"t" "meteonet" code start stop))
(covariance (map ts stations))
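
A lighter sketch with two synthetic timeseries (names and values are purely illustrative); the result is a 2x2 covariance matrix.

(def start 2019-01-01)
(defn mk-ts[f]
  (timeserie (range start (+ (* 9 1h) start) 1h) f))
(covariance
  (mk-ts (fn[t] (rand-u 0 1 t)))
  (mk-ts (fn[t] (rand-u -1 1 t))))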

dropout

Dropout randomly replaces input values with zero, with the given probability.

It is only active during learning; it does nothing when called directly.

(dropout in p)

parameter
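
example

A minimal sketch (values are illustrative): dropout with probability 0.5 applied directly to a tensor. Since dropout is only active during learning, calling it like this simply returns the input unchanged.

(dropout (tensor (shape 4 4) (fn[i] i)) 0.5)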

import

Imports a resource from a URI (URL). Still in development: only PNG, JPEG and GIF images are supported, imported as RGBA tensors.

(import uri {checksum})

parameter

examples

Imports the small Strigi-Form logo as an RGBA tensor and shows it as an image.
The URL points to the local LispTick home images folder.

(import "file://~/http_server/images/logo_symbole_1x.png")

Same source image from the official LispTick server, but keeping only channel index 0, the red one.

(get (import "https://lisptick.org/images/logo_symbole_1x.png") 0)

maxpool

1D or 2D max pooling replaces each input value by the maximum value in its neighborhood.
Used to reduce input dimensionality.

(maxpool in kernel pad stride)

parameter

examples

Maximum

Replaces each value by the maximum in a 2x2 neighborhood.

(def
  ;get 1st (red) component of image
  gray (get (import "file://~/http_server/images/tix_color.png") 0)
)
[
  gray
  (maxpool gray 2)
]

Reduce size

Reduces input dimensionality by 2 using the maximum in a 2x2 neighborhood.
Zero padding and a stride of 2, so each dimension is halved.

(def
  ;get 1st (red) component of image
  gray (get (import "file://~/http_server/images/tix_color.png") 0)
)
[
  gray
  (maxpool gray 2 0 2)
]

shape

Shape of a tensor: an array of integers representing the size of each dimension.

See tensor for examples.

(shape arg1 {arg2 {arg3...}})

parameter

solve

Cost-function optimizer using stochastic gradient descent.

Internally LispTick uses the Gorgonia package. All optimizer models, their options and default values are mapped to Gorgonia's models and default values.

(solve [learn1 ...] [ts1 ...] cost epochs [model {(option . value)...}])

parameter

Available models, their optional arguments and default values

example

This example shows how to describe, train and use an NN with one hidden layer to learn a simple function like cosine. You can play with it and change the hidden layer size, target functions…

The solver used is Adam, Adaptive Moment Estimation (see the paper).

(def
  pi      3.14159265359 ;π
  hidden  8
  size    10000

  ;randomly initialized weights
  w0 (tensor (shape hidden 1) (fn[x] (rand-g 0 1)))
  b0 (tensor (shape hidden 1) (fn[x] (rand-g 0 1)))
  w1 (tensor (shape 1 hidden) (fn[x] (rand-g 0 1)))

  start 2019-01-01
)
(def
  ;timeserie with size points, uniform random values between -π and π
  ts_angle
    (timeserie
      (range start (+ (* (- size 1) 1h) start) 1h)
      (fn[t] (rand-u (* -1 pi) pi t)))
  ;target timeserie: simply the cosine of the input
  ts_target (cos ts_angle)
)

;neural network transfer function with one hidden layer
(defn transfert[x]
  (mat* w1 (sigmoid (+ b0 (mat* w0 x)))))
;cost function: squared error
(defn cost[x ref]
  (+ (sqr (- (transfert x) ref))))

;trick to force the solver to run by looking at its last value
(vget (last
  (solve
    ["w0" "b0" "w1"]
    [ts_angle ts_target]
    cost
    2 ;few epochs
    ["adam"])))

;use the learned NN to compute cosine!
(transfert 0)

svd-s

Singular Value Decomposition, singular values.

See this nice article on how to use it for machine learning.

(svd-s matrix)

parameter
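
example

A minimal sketch (matrix values are illustrative): singular values of a small hard-coded matrix.

(svd-s (tensor (shape 2 3) [1 2 4 3 6 0]))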

svd-u

Singular Value Decomposition, U orthonormal base.

See this nice article on how to use it for machine learning.

(svd-u matrix)

parameter
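
example

A minimal sketch (same illustrative matrix as for svd-s): the U orthonormal base.

(svd-u (tensor (shape 2 3) [1 2 4 3 6 0]))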

svd-v

Singular Value Decomposition, V orthonormal base.

See this nice article on how to use it for machine learning.

(svd-v matrix)

parameter
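
example

A minimal sketch (same illustrative matrix as for svd-s): the V orthonormal base.

(svd-v (tensor (shape 2 3) [1 2 4 3 6 0]))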

tensor

Creates a tensor, generally from 1D to 4D.

(tensor shape {values|fn})

Or combines several tensors to create a higher-dimension tensor.
Each tensor must have the same shape; the result shape will be n x tensor_shape (see the last example below).

(tensor t1 .. tn)

parameter

examples

Hard coded 2D matrices:

(tensor (shape 2 3) [1 2 4 3 6 0])
(tensor (shape 3 2) [1 2 4 3 6 0])

Index as value with an anonymous identity function:

(tensor (shape 3 2) (fn[i] i))

Randomly generated values with rand-g, index unused:

(tensor (shape 8 16) (fn[i] (rand-g 0 1)))
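
Combining two tensors of identical shape into a higher-dimension tensor, a minimal sketch (values are illustrative; the result has shape 2 x 2 x 3):

(def
  a (tensor (shape 2 3) [1 2 4 3 6 0])
  b (tensor (shape 2 3) [0 1 0 1 0 1])
)
(tensor a b)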

transpose

Tensor transposition.

(transpose tensor)

parameter

example

(def t 
  (tensor (shape 2 3) [1 2 4 3 6 0]))
(transpose t)