Calculated columns with a trigger

Some say that calculated columns are bad design because they do not follow the relational model. That is true, but performance can be greatly enhanced.

In this simple example, we have a table with three columns. The first and the second are given by the user and the third is calculated by a trigger. We add a condition : a zero operand is treated as 1, so x * 0 = 0 * x = x (this fantasy comes from the requirements of a customer). It is done with the case expression.

drop table if exists tup;

create table tup (
     col1   numeric(10,2)
    ,col2   numeric(10,2)
    ,col3   numeric(10,2)
);

insert into tup(col1,col2) values
     (2.87      ,3.77)
    ,(4         ,5.11)
    ,(2.12      ,0)
    ,(0.0       ,3);

update tup
    set col3 =     (case col1 when 0 then 1 else col1 end)
                *  (case col2 when 0 then 1 else col2 end);

select * from tup;

will give

col1     col2   col3
---------------------
2.87     3.77   10.82
4.00     5.11   20.44
2.12     0.00    2.12
0.00     3.00    3.00

Let's now add the trigger function and the trigger :

create or replace function calculate_columns() returns trigger as $$
begin

  new.col3 =   (case new.col1 when 0 then 1 else new.col1 end)
             * (case new.col2 when 0 then 1 else new.col2 end);

  return new;

end $$ language plpgsql;

create trigger calculated_columns  
    before insert or update 
    on tup
    for each row
    execute procedure calculate_columns();

Note that we use a before insert or update trigger and that we set the calculated column on the row new.

insert into tup(col1, col2) values
     (6.23      ,2)
    ,(0         ,55.11);

select * from tup;

will give :

col1     col2   col3
---------------------
2.87     3.77   10.82
4.00     5.11   20.44
2.12     0.00    2.12
0.00     3.00    3.00
6.23     2.00   12.46
0.00    55.11   55.11
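
As a quick extra check (hypothetical, not part of the original script), an update should also fire the trigger and recompute col3 :

update tup set col2 = 0 where col1 = 4;
select col1, col2, col3 from tup where col1 = 4;
-- col1 = 4.00, col2 = 0.00, col3 = 4.00 (the zero operand is treated as 1)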

The deal is in the bag!

How to safely delete records ?

To collect accurate data is expensive !

To structure data is very expensive !

To record accurate structured data is very very expensive !

Then…

Once data is recorded in a database, it is not a good idea to delete it permanently. It is better to archive the records we don't want to see anymore in the database.

We will build a very simple framework for a soft delete with the PostgreSQL trigger functions. In a previous post, we developed an audit system that we will use again. We just have to modify the pre-delete trigger.

The components are :

  • a schema called ‘deleted’ containing the same tables as the database
  • a trigger pre-delete on each table

and, inherited from the audit system :

  • an additional column on each table called audit_id
  • a table called audit_history

Let’s begin with the creation of the schema ‘deleted’ :

create or replace function public.create_schema_deleted() returns void as $$
declare 
 r	record;
 stmt	varchar;
begin

 execute 'create schema deleted;';
 
 for r in
  select 
   quote_ident(table_name) as table_name
  from 
   information_schema.tables 
  where 
   table_schema = 'public'

  loop

   stmt :=    'create table deleted.' 
     || r.table_name 
     || ' (like public.' 
     || r.table_name 
     || ')';

   execute stmt;

  end loop;
  
  execute 'grant usage on schema deleted to generic_user;';
  execute 'grant insert on all tables in schema deleted to generic_user;';

end $$ language 'plpgsql' volatile;

select public.create_schema_deleted();

This schema contains all the tables of your production database without indexes, without pk/fk constraints, without sequences and without triggers. The role generic_user is granted to the actual users.

Just after the creation of the schema, the tables are empty.

Each table has a column audit_id. It is a bigserial that is incremented automatically when we insert a record in the production database.

The tables in the schema ‘deleted’ of course also have this column.
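
For an existing table that does not have this column yet, it can be retrofitted with a simple alter (a sketch ; the table name app_table is hypothetical here) :

alter table public.app_table add column audit_id bigserial;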

We now create the trigger function that will be executed each time a user deletes a record in the production database :

create or replace function public.pre_delete() returns trigger as $$
declare
 stmt 	varchar;
begin

 insert into public.audit_history
  (table_name, operation, audit_id, user_name, audit_date)
 values 
  (TG_TABLE_NAME, TG_OP, old.audit_id, current_user, now());

  stmt :=    'insert into deleted.'
   || quote_ident(TG_TABLE_NAME)
   || ' select * from public.'
   || quote_ident(TG_TABLE_NAME)
   || ' where audit_id = $1;';

  execute stmt using old.audit_id;
        
  return old;

end; $$ language 'plpgsql'; 

and a trigger pre-delete for each table :

create or replace function public.create_pre_delete_triggers() 
   returns void as $$
declare
 r	record;
 stmt	varchar;
begin
 for r in 
    select table_name 
    from information_schema.tables 
    where table_schema = 'public'
  loop

    stmt :=    'create trigger ' 
     || quote_ident(r.table_name || '_pre_delete')
     || ' before delete on public.' 
     || quote_ident(r.table_name)
     || ' for each row execute procedure public.pre_delete();';

    execute stmt;

  end loop;

end; $$ language 'plpgsql'; 

select public.create_pre_delete_triggers();

If we want to find a deleted record, we have a look in the table audit_history that has the following structure :

 id_audit_history  bigserial unique 
,table_name        text 
,operation         text 
,audit_id          bigint 
,user_name         text 
,audit_date        timestamp

And we do a select in the schema ‘deleted’ for this table and for this audit_id.
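
For example (a hypothetical lookup, assuming a table named app_table) :

-- 1. find the audit_id of the deleted record
select audit_id, audit_date
 from public.audit_history
 where table_name = 'app_table'
 and operation = 'DELETE';

-- 2. fetch the archived row with that audit_id
select * from deleted.app_table where audit_id = 3;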

That’s simple !

By the way, when I was working on the script, I was listening to one of the last masterworks of the Johann Sebastian Bach of our time : Runner by Steve Reich. I can listen to this music for hours. It enhances the intellect and purifies the soul.

A simple audit system

Let’s suppose we want to audit all insert, update and delete committed on a database.

We create a table to store relevant data :

drop table if exists audit_history;
create table audit_history (
  id_audit_history       serial
 ,table_name             text
 ,operation              text
 ,audit_id               bigint
 ,user_name              text
 ,insertion_date         timestamp       default now()
);

drop sequence if exists audit_history_seq;
create sequence audit_history_seq;

We create two test tables with pk constraints, one having a fk constraint to the other. Note that this fk constraint has the clause on delete cascade.

drop table if exists app_table_ref;
drop table if exists app_table;

create table app_table (
  pk             integer         primary key
 ,info           varchar
 ,audit_id       bigserial
);

create table app_table_ref (
  pk             integer         primary key
 ,info           varchar
 ,fk             integer         
          references app_table(pk)        
          on delete cascade
 ,audit_id       bigserial
);

Note the presence of the column audit_id, a bigserial that is not a pk.

We now create the trigger function, a generic function for the two triggers : after insert or update, and before delete.

create or replace function populate_audit_history() 
   returns trigger as $$
begin

  if (TG_OP = 'INSERT' or TG_OP = 'UPDATE') then

      insert into audit_history
        (table_name, operation, audit_id, user_name)
      values
        (TG_TABLE_NAME, TG_OP, new.audit_id, current_user);

      return new;

  else
  
     insert into audit_history
        (table_name, operation, audit_id, user_name)
      values
        (TG_TABLE_NAME, TG_OP, old.audit_id, current_user);

     return old;

  end if;
end;
$$ language 'plpgsql';


create or replace function create_audit_triggers() 
   returns void as $$
declare
   r       record;
   stmt    varchar;
begin
  for r in select table_name from information_schema.tables 
              where table_name like 'app_table%' loop

     stmt :=    'create trigger '
             || quote_ident(r.table_name || '_audit_after_iu')
             || ' after insert or update on '
             || quote_ident(r.table_name)
             || ' for each row 
                    execute procedure populate_audit_history();';

     execute stmt;
   
     stmt :=    'create trigger '
             || quote_ident(r.table_name || '_audit_before_d')
             || ' before delete on '
             || quote_ident(r.table_name)
             || ' for each row 
                    execute procedure populate_audit_history();';

     execute stmt;

   end loop;
end;
$$ language 'plpgsql';

select create_audit_triggers();

Note the execute instruction that allows us to execute dynamic SQL statements.
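
Outside a trigger, the same mechanism can be tried in an anonymous block (a minimal sketch ; format() with %I safely quotes the identifier) :

do $$
declare
   cnt bigint;
begin
   -- run a statement whose table name is only known at run time
   execute format('select count(*) from %I', 'app_table') into cnt;
   raise notice 'app_table contains % rows', cnt;
end $$;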

We now do some transactions on those two tables :

insert into app_table values(23,'record 23');
insert into app_table values(56,'record 56');
insert into app_table values(71,'record 71');
insert into app_table values(82,'record 82');
insert into app_table values(85,'record 85');
insert into app_table values(91,'record 91');
insert into app_table values(94,'record 94');
insert into app_table values(97,'record 97');
insert into app_table values(99,'record 99');
update app_table set info = 'modified' where pk = 23;
update app_table set info = 'modified' where pk = 56;
update app_table set info = 'modified' where pk = 97;
delete from app_table where pk = 71;
insert into app_table values(101,'record 101');
insert into app_table values(121,'record 121');
insert into app_table values(167,'record 167');
update app_table set info = 'modified' where pk = 101;
delete from app_table where pk = 121;
insert into app_table_ref values(1, 'ref1', 23);
insert into app_table_ref values(2, 'ref2', 23);
insert into app_table_ref values(3, 'ref3', 82);
delete from app_table where pk = 23;

Here is the table audit_history :

1  | app_table     | INSERT |  1 | mchl | 2017-02-08 15:01:10.28164
2  | app_table     | INSERT |  2 | mchl | 2017-02-08 15:01:10.292839
3  | app_table     | INSERT |  3 | mchl | 2017-02-08 15:01:10.304088
4  | app_table     | INSERT |  4 | mchl | 2017-02-08 15:01:10.315051
5  | app_table     | INSERT |  5 | mchl | 2017-02-08 15:01:10.32618
6  | app_table     | INSERT |  6 | mchl | 2017-02-08 15:01:10.337284
7  | app_table     | INSERT |  7 | mchl | 2017-02-08 15:01:10.348366
8  | app_table     | INSERT |  8 | mchl | 2017-02-08 15:01:10.35954
9  | app_table     | INSERT |  9 | mchl | 2017-02-08 15:01:10.370595
10 | app_table     | UPDATE |  1 | mchl | 2017-02-08 15:01:10.381727
11 | app_table     | UPDATE |  2 | mchl | 2017-02-08 15:01:10.392824
12 | app_table     | UPDATE |  8 | mchl | 2017-02-08 15:01:10.403822
13 | app_table     | DELETE |  3 | mchl | 2017-02-08 15:01:10.414956
14 | app_table     | INSERT | 10 | mchl | 2017-02-08 15:01:10.4261
15 | app_table     | INSERT | 11 | mchl | 2017-02-08 15:01:10.437168
16 | app_table     | INSERT | 12 | mchl | 2017-02-08 15:01:10.448195
17 | app_table     | UPDATE | 10 | mchl | 2017-02-08 15:01:10.459344
18 | app_table     | DELETE | 11 | mchl | 2017-02-08 15:01:10.470463
19 | app_table_ref | INSERT |  1 | mchl | 2017-02-08 15:01:10.481536
20 | app_table_ref | INSERT |  2 | mchl | 2017-02-08 15:01:10.49259
21 | app_table_ref | INSERT |  3 | mchl | 2017-02-08 15:01:10.503579
22 | app_table     | DELETE |  1 | mchl | 2017-02-08 15:01:10.514913
23 | app_table_ref | DELETE |  1 | mchl | 2017-02-08 15:01:10.514913
24 | app_table_ref | DELETE |  2 | mchl | 2017-02-08 15:01:10.514913

And here is the table app_table :

  56 | modified   |        2
  82 | record 82  |        4
  85 | record 85  |        5
  91 | record 91  |        6
  94 | record 94  |        7
  97 | modified   |        8
  99 | record 99  |        9
 101 | modified   |       10
 167 | record 167 |       12

and the table app_table_ref :

  3 | ref3 | 82 |        3

We note that the delete in app_table of the record having pk = 23 has also deleted the two records of app_table_ref having fk = 23.

This design is very handy because you can add auditing to an existing database by adding to each table : 1/ a serial column (that is not a pk) and 2/ the two triggers (after the creation of the generic function).

The applications are not touched and the referential integrity of the database is preserved.

Having, for each modification, the name of the table and the value of the audit_id, we can find, for inserts and updates, the rows that have changed.
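
A sketch of such a lookup for app_table (the join relies on audit_id identifying a row of the audited table) :

-- current state of the rows of app_table touched by an update
select h.insertion_date, t.*
 from audit_history h
 inner join app_table t on t.audit_id = h.audit_id
 where h.table_name = 'app_table'
 and h.operation = 'UPDATE';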

Trees (3) : ltree extension of PostgreSQL

We created the mind map of the book “The pedagogy of the Ramchal” with WiseMapping. In the future, we will develop our own visual mind mapping tool bs”d.

We want now

  • to import this mind map into our database using the ltree extension of PostgreSQL, an implementation of trees with materialized paths (pre-calculated paths that describe the ancestry of nodes)
  • to encapsulate this ltree in RelGraph, our relational data model for graphs
  • to draw the tree with an R script

We export the mind map in text format and we remove the leading spaces in vi with :

:%le

Then we load the raw data in a staging table, because it is often a good idea to use the expressive power of SQL to parse and arrange raw data.
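
The staging table itself can be as simple as a single text column (a minimal sketch, inferred from the column input_line used below) :

drop table if exists tree_staging;
create table tree_staging (
 input_line text
);

Then comes the bulk load :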

# we start psql with user postgres (needed by copy)
sudo -i -u postgres psql -d higayone

# we do a bulk copy
copy tree_staging from '/home/mchl/Desktop/Blog/map.txt';

The table tree_staging looks like this :

input_line
------------------------------------------------------------------------------
1 The pedagogy of the Ramchal
1.1 The man
1.1.1 His life
1.1.1.1 Padua
1.1.1.1.1 Yeshayah Bassan
1.1.1.1.2 Isaac Cantarini
1.1.1.1.3 Aviad Sar Shalom Basilea
1.1.1.2 Amsterdam
1.1.1.2.1 Talmidim
1.1.1.2.1.1 Marranes
1.1.1.2.1.2 Modern science
1.1.1.3 Eretz Israel
...

Then we create extension ltree :

create extension if not exists ltree;

Then we create the table and the sequence for the ltree :

drop table if exists ltree_node;
drop sequence if exists ltree_node_sequence;
create sequence ltree_node_sequence;

create table ltree_node (
 node_id integer primary key
    default nextval('ltree_node_sequence')
 ,node_name text
 ,node_path ltree
);

It is necessary to drop the table before the sequence, otherwise PostgreSQL will raise an error (the default value of node_id depends on the sequence). Then we add a GiST index on the ltree column node_path :

create index path_gist_idx on ltree_node using gist(node_path);

Then we populate the ltree with the data of the staging table :

insert into ltree_node(node_name, node_path) select
 substring(input_line, 
    char_length(substring(
        input_line from '[0-9.]*[ ]')) + 1) as node_name
 ,cast(substring(input_line from '[0-9.]*') as ltree) as node_path
from tree_staging;

The expression

substring(input_line, 
    char_length(substring(
        input_line from '[0-9.]*[ ]')) + 1) as node_name

takes every character after the first white space until the end of the line (the label of the node). And the expression

cast(substring(input_line from '[0-9.]*') as ltree) as node_path

takes the digits and dots from the beginning of the line until the first white space (the materialized path).
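
A worked example on one line of the staging table :

select
  substring('1.1.1 His life' from '[0-9.]*') as node_path    -- gives '1.1.1'
 ,substring('1.1.1 His life',
    char_length(substring('1.1.1 His life' from '[0-9.]*[ ]')) + 1)
                                             as node_name     -- gives 'His life'
;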

Let’s have now a look on a first application of ltree using the <@ operator. This query gives all the paths of the tree :

 select
 ln1.node_id
 ,array_to_string(
    array_agg(ln2.node_name order by ln2.node_path)
              ,'/') as full_path_name
from  
 ltree_node as ln1 
inner join 
 ltree_node as ln2 
on 
 ln1.node_path <@ ln2.node_path 
group by 
 ln1.node_id, ln1.node_path, ln1.node_name
order by 
 ln1.node_id, full_path_name;

The line

ln1.node_path <@ ln2.node_path

means that the left argument is a descendant of the right one (or equal to it). And array_agg is an aggregate function (working with group by) transforming its arguments into an array.
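
For instance :

select '1.1.1'::ltree <@ '1.1'::ltree;    -- true  : '1.1.1' is a descendant of '1.1'
select '1.1'::ltree <@ '1.1.1'::ltree;    -- false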

The output looks like this :

node_id full_path_name
------- --------------
1       The pedagogy of the Ramchal
2       The pedagogy of the Ramchal/The man
3       The pedagogy of the Ramchal/The man/His life
4       The pedagogy of the Ramchal/The man/His life/Padua
5       The pedagogy of the Ramchal/The man/His life/Padua/Yeshayah Bassan
6       The pedagogy of the Ramchal/The man/His life/Padua/Isaac Cantarini
7       The pedagogy of the Ramchal/The man/His life/Padua/Aviad Sar Shalom Basilea
8       The pedagogy of the Ramchal/The man/His life/Amsterdam
9       The pedagogy of the Ramchal/The man/His life/Amsterdam/Talmidim
10      The pedagogy of the Ramchal/The man/His life/Amsterdam/Talmidim/Marranes
11      The pedagogy of the Ramchal/The man/His life/Amsterdam/Talmidim/Modern science
12      The pedagogy of the Ramchal/The man/His life/Eretz Israel
...

We now populate the tables of RelGraph, our relational model for graphs.

1. We create a new graph :

insert into graph(graph_name) values('Book');

2. We populate the table node :

insert into node(graph_id, node_name) select
 (select graph_id from graph where graph_name = 'Book')
 ,node_name
from
 ltree_node;

3. We populate the table edge :

insert into edge(from_node, to_node) select
 ln2.node_id as from_node
 ,ln1.node_id as to_node
from
 ltree_node ln1
 ,ltree_node ln2
where
 ln1.node_path = subpath(ln2.node_path
                         ,0
                         ,nlevel(ln2.node_path) - 1);

We see here two other functions of ltree :

  • subpath : here, the subpath from the beginning of the path up to, but excluding, the last component, that is, the path of the parent node (see the examples below)
  • nlevel : gives the number of components of the path
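
A couple of examples :

select nlevel('1.1.2.1'::ltree);                                     -- gives 4
select subpath('1.1.2.1'::ltree, 0, nlevel('1.1.2.1'::ltree) - 1);   -- gives '1.1.2'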

But we have to find solutions to make the inserts / updates / deletes performed in the ltree visible in RelGraph. It will be the topic of a future post.

Having node and edge populated, we can run the following R script (which, in my private language, I call a graph slurper) to visualize the tree :

require(visNetwork)
require(RPostgreSQL) 

# establish connection 
driver <- dbDriver("PostgreSQL") 
connexion <- dbConnect(driver, dbname="higayone"
                      ,host="localhost", port=5432
                      ,user="mchl", password="secret")                              

# populate nodes and edges
nodes <- dbGetQuery(connexion, "
 select 
   node_id as id
  ,node_name as label
 from node
 order by node_id;") 

edges <- dbGetQuery(connexion, "
 select 
   from_node as \"from\"
  ,to_node as \"to\"
 from edge
 order by from_node, to_node;") 

visNetwork(nodes, edges) %>%
  visNodes(font = '10px', shape = 'ellipse')

 

Here is a thumbnail (I am afraid I cannot do better with the free version of WordPress)  :

Book

If we want to generate a table of contents, we must think of two things :

  • node_path ‘1’, becoming the title of the book, must be removed from the tree
  • for all other node_path, we simply remove the leading ‘1.’

select
 substring(node_path::text, 3, char_length(node_path::text))
 ,repeat('   ', nlevel(node_path)- 2) || node_name
from
 ltree_node
where node_path != '1'
order by node_path;

That gives :

1           The man
1.1            His life
1.1.1             Padua
1.1.1.1              Yeshayah Bassan
1.1.1.2              Isaac Cantarini
1.1.1.3              Aviad Sar Shalom Basilea
1.1.2             Amsterdam
1.1.2.1              Talmidim
1.1.2.1.1               Marranes
1.1.2.1.2               Modern science
1.1.3             Eretz Israel
...

RelGraph : a relational model for graphs

Terminology

{ graphs } = { trees } ∪ { networks }

A graph = { nodes } ∪ { edges }

Why a relational model for graphs ?

It is more natural to implement graph structures with a graph oriented database. But we have good reasons to use a relational database as explained here.

The Predicate Calculus is a very simple theory : it is the basic set theory that every child knows. Of course, with the relational model, we are obliged to follow the strict discipline of normal forms. But we see the benefits when we write queries, because the Predicate Calculus is a very elegant theory : the queries are naturally deducible from the data model. It is like geometry : with a good system of axes, the equations can be greatly simplified.

How to represent a graph with relations

A graph is the union of two sets : the set of nodes and the set of edges. It is natural to represent a graph by two relations (two tables in SQL jargon).

node(id, info)
edge(from_node, to_node, name, info)

where from_node and to_node (the nodes connected in this order) are FKs to node(id).

The actual implementation can be done by a specialized library. Something like ltree for trees in PostgreSQL. The implementation is encapsulated in our model. The relation between PKs and FKs is hidden. It is of course not necessary that this implementation follows the relational model. The application developer sees usual relational tables.

Functions

Of node and edge, we would like to ask a lot of things (one of them is sketched in SQL after the lists) :

  • Select/insert/update/delete nodes, edges and subgraphs
  • Enumerate nodes and edges
  • List all nodes pointing to a given node
  • List all nodes pointed by a given node
  • Do you find cycles ? or trees ?
  • Do you find disconnected graphs or isolated nodes ?
  • If the graph is a network, build spanning trees

For a tree, we would like to ask :

  • Do a breadth-first or a depth-first search
  • Give lexicographic notation (1, 1.1, 1.1.1,…)
  • Delete a subtree starting at a given node
  • Give all the ancestors or descendants of a given node
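
As a taste of how such questions translate into SQL on this model, here is a sketch for "list all nodes pointing to a given node" (using the tables of the physical model below ; the node id 42 is hypothetical) :

select n.id, n.name
from node n
inner join edge e on e.from_node = n.id
where e.to_node = 42;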

Associations and groups

Having interactive visualizations on our agenda, we need two other relations : associations and groups.

Association : we would like to connect nodes with specific edges.

association(from_node, to_node, name, info)

Groups : we would like to group nodes having a common property. We don't forget that a node can belong to several groups : there is a many-to-many relation between the relations node and grouping. We need a go-between relation populated with the PKs of both relations.

node(id, name, info)
node_grouping(node_id, grouping_id)
grouping(id, name, info)

where node_id is a FK to node(id) and grouping_id is a FK to grouping(id).

The logical model

We add a relation graph that keeps track of all the graphs created in the schema. In such a way, all graphs use the same basic relations that are created only once.

graph(id, name, info)

node(id, graph_id, name, info)

edge(edge_id, from_node, to_node, name, info)

association(from_node, to_node, name, info)

node_grouping(node_id, grouping_id)

grouping(grouping_id, name, info)

With the relations node, edge, association and grouping well populated, we can expect

  • to have access to the data from a programming language in a very direct way
  • to find a visualization library for graphs that will interpret our data structures in a very direct way

The physical model

create sequence graph_sequence;
create table graph (
 id integer not null
    default nextval('graph_sequence')
    primary key
 ,name text
 ,info text
);

create sequence node_sequence; 
create table node ( 
 id integer not null 
    default nextval('node_sequence') 
    primary key
 ,graph_id integer not null
    references graph(id) 
 ,name text
 ,info text
); 

create sequence edge_sequence;
create table edge (
 id integer not null
    default nextval('edge_sequence') 
 ,from_node integer not null 
    references node(id)
    on update cascade 
    on delete cascade
 ,to_node integer not null 
    references node(id)  
    on update cascade 
    on delete cascade
 ,name text
 ,info text
 ,primary key(from_node, to_node)
); 

create sequence association_sequence; 
create table association ( 
 id integer not null 
    default nextval('association_sequence') 
    primary key 
 ,from_node integer not null 
    references node(id) 
 ,to_node integer not null 
    references node(id) 
 ,name text
 ,info text 
); 

create sequence grouping_sequence; 
create table grouping ( 
 grouping_id integer not null 
    default nextval('grouping_sequence') 
    primary key 
 ,name text 
 ,info text
); 

create table node_grouping ( 
 node_id integer not null 
    references node(id) 
 ,grouping_id integer not null 
    references grouping(grouping_id) 
 ,primary key(node_id, grouping_id)
);

 

Trees (2) : visualization with R

 Rplot

This mind map represents the following hierarchy of topics we used in a previous post (this one is small and inactive, but with the visNetwork graph visualization R package, it will be possible to develop animated and interactive mind maps) :

Root
            Relational theory
                      First-order predicate logic
                                Syntax
                                Rules of inference
                                Deductive systems
                                Semantics
                      Three-valued logic  
                      Normal forms
                                Functional dependencies
                                Multivalued dependencies
                                1NF
                                2NF
                                3NF
                                BCNF
                                4NF
                                5NF
                                DKNF
           SQL
                      Types and domains
                      PKs & FKs
                      Relations
                      Operators
                            Restriction
                            Projection
                            Joins
                               Cross join
                               Inner join
                               Left outer join
                               Right outer join
                               Full outer join
                            Union
                            Intersection
                            Difference
                            Divide
                      Aggregations
                      Constraints
                      Views

 

We changed the data structure for the implementation of the Adjacency List Model for trees with relational operators.

Instead of

tree(node_id, node_up_id, topic) 
    where node_up_id is a reference (FK) to node_id

we use a two-tables representation of  a tree :

nodes(node_id, topic)

edges(from_node, to_node) 
    where from_node and to_node are references (FKs) to nodes.node_id

It’s always better to use normalized data model. It takes often a bit of effort but the queries will be easier to design.

We implement this model in PostgreSQL :

create sequence node_sequence;

create table nodes (
 node_id integer not null 
    default nextval('node_sequence') 
    primary key
 ,topic varchar(128)
);

create table edges (
 from_node integer not null 
    references nodes(node_id)
    on update cascade
 ,to_node integer not null
    references nodes(node_id)
 ,check (from_node != to_node)
 ,primary key(from_node, to_node)
 );

The visualization of the mind map with the visNetwork R library is now straightforward.

1. We load the libraries and we connect to the database :

require(RPostgreSQL)
require(visNetwork)

driver <- dbDriver("PostgreSQL") 
connexion <- dbConnect(driver
 ,dbname="higayone"         
 ,host="localhost", port=5432
 ,user="mchl", password="secret")                 

2. We populate nodes and edges ; the R data structure is a direct mapping of the relational structure (an R data frame is a relational table) :

nodes <- dbGetQuery(connexion, "
 select
   node_id as id
  ,topic as label
 from nodes
 order by node_id;")

edges <- dbGetQuery(connexion, "
 select
   from_node as \"from\"
  ,to_node as \"to\"
 from edges
 order by to_node, from_node;")

3. We call the graphic interface ; the visNetwork data structure is a direct mapping of the R data structure (the visNetwork function expects nodes identified by id and label, and edges identified by from and to) :

visNetwork(nodes, edges) %>%
 visNodes(font = '10px', shape = 'ellipse')