remove duplicate write_log_file()

executed as a cron job.

number of arguments.

network is updated
The Apache web server will be restarted daily to free memory:
sudo service apache2 restart
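
A sketch of how the daily restart could be scheduled with cron; the 04:00 time and the use of root's crontab are
assumptions, not taken from the commit:

# hypothetical entry in root's crontab (edit with: sudo crontab -e)
# restart Apache every day at 04:00 to free memory
0 4 * * * service apache2 restart
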
running update_network_by_force.py

edges.txt
Visit the following link to update two pickle files used by the Webapp, G.pickle and SOURCE_NODES.pickle.
http://118.25.96.118/brain/before
The visit can be done with the command-line tool curl, as follows:
curl http://118.25.96.118/brain/before
-Hui
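
A minimal sketch of making the same visit from Python instead of curl (for example, from a script run as a cron job);
the timeout value is an arbitrary choice:

# send a GET request to the update URL, same effect as the curl command above
import urllib.request

URL = 'http://118.25.96.118/brain/before'
with urllib.request.urlopen(URL, timeout=600) as resp:
    print(resp.status, resp.read(200))
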
edges.sqlite
When I saved a static HTML page for each edge (e.g., http://118.25.96.118/static/edges/AT1G20910_AT1G30100_0.html), it took
5 GB of disk space to store 1 million HTML pages. Not very space efficient.
An alternative is to save all edge information in a database table (i.e., edge) and query that table for a particular edge.
The database file edges.sqlite takes less than 200 MB for 1 million edges, about 10 times less space than the static
approach. The reason is that we do not store a lot of HTML tags in the database. Quite happy about that, though filling
the database seems a bit slower (roughly 2 hours for 1 million rows).
Also updated the two files that were affected: update_network.py and update_network_by_force.py. Now, instead of copying
1 million static HTML pages to the Webapp, I just need to copy edges.sqlite to static/edges/. Faster.
In the Webapp, I updated start_webapp.py and added a file templates/edge.html for handling dynamic page generation.
-Hui
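
A minimal sketch of the dynamic per-edge lookup described above. The table name edge and the file edges.sqlite come from
the message, but the column name edge_id, the URL rule, and the use of Flask in start_webapp.py are assumptions:

# hypothetical sketch; column names, route, and Flask usage are assumptions
import sqlite3
from flask import Flask, render_template

app = Flask(__name__)
EDGE_DB = 'static/edges/edges.sqlite'  # copied here by update_network_by_force.py

def get_edge(edge_id):
    # Look up one edge, e.g. 'AT1G20910_AT1G30100_0', in the edge table.
    conn = sqlite3.connect(EDGE_DB)
    conn.row_factory = sqlite3.Row
    try:
        cur = conn.execute('SELECT * FROM edge WHERE edge_id = ?', (edge_id,))
        return cur.fetchone()
    finally:
        conn.close()

@app.route('/edges/<edge_id>')
def edge_page(edge_id):
    # Render templates/edge.html for one edge instead of serving a
    # pre-generated static HTML file.
    row = get_edge(edge_id)
    if row is None:
        return 'edge not found', 404
    return render_template('edge.html', edge=dict(row))
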
It would be interesting to see how edges' association strengths change over time, as time is an input variable for the
function that computes the association strength.

Define a function copy_and_backup_file(src_file, dest_dir) to do the backup and compression work.
The function copy_and_backup_file() is used in update_network_by_force.py.
-Hui
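
A minimal sketch of what copy_and_backup_file could look like, assuming "compression" means gzip and that the backup
gets a date-stamped name; the actual implementation in the repository may differ:

# hypothetical implementation; gzip and the date-stamped name are assumptions
import gzip
import os
import shutil
from datetime import datetime

def copy_and_backup_file(src_file, dest_dir):
    # Copy src_file into dest_dir as a gzip-compressed, date-stamped backup,
    # e.g. edges.txt -> dest_dir/edges.txt.20230101.gz
    os.makedirs(dest_dir, exist_ok=True)
    stamp = datetime.now().strftime('%Y%m%d')
    dest_file = os.path.join(dest_dir, '%s.%s.gz' % (os.path.basename(src_file), stamp))
    with open(src_file, 'rb') as fin, gzip.open(dest_file, 'wb') as fout:
        shutil.copyfileobj(fin, fout)
    return dest_file

For example, update_network_by_force.py could call copy_and_backup_file('edges.txt', 'backup/') after an update; the
file name and directory here are only illustrative.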