Commit messages
network is updated
The Apache web server will be restarted daily to free memory:
sudo service apache2 restart

running update_network_by_force.py

Compute an edge's strength on the fly instead of saving everything and then computing the net strength.
The new function make_new_edge2 will replace make_new_edge.
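The commit does not show make_new_edge2 itself. As a rough illustration of the on-the-fly idea only, the sketch below accumulates each edge's strength as observations stream in; the function name, the (target_id, tf_id, score) layout, and the summing rule are assumptions, not the actual implementation.

```python
from collections import defaultdict

def accumulate_edge_strengths(observations):
    """Keep one running strength per edge instead of storing every
    observation and computing the net strength afterwards.
    `observations` yields (target_id, tf_id, score) tuples (hypothetical layout)."""
    strength = defaultdict(float)
    for target_id, tf_id, score in observations:
        key = '%s_%s' % (target_id, tf_id)  # target gene ID + TF gene ID
        strength[key] += score              # updated on the fly; nothing else is kept
    return strength
```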
When merging many big edge files, the computer may run out of memory.
Save the edge files that have been considered so far, so we can figure out where merging stopped.
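A minimal sketch of one way to make such a merge resumable, assuming progress is checkpointed to a small text file; the checkpoint file name and the merge_one callback are illustrative, not what update_network_by_force.py actually does.

```python
import os

CHECKPOINT = 'merged_edge_files.txt'   # hypothetical progress file

def merge_edge_files(edge_files, merge_one):
    """Merge edge files one at a time, recording each finished file so an
    interrupted run (e.g., out of memory) can figure out where it stopped."""
    done = set()
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = set(line.strip() for line in f)
    with open(CHECKPOINT, 'a') as log:
        for fname in edge_files:
            if fname in done:
                continue               # merged in an earlier run; skip
            merge_one(fname)           # caller-supplied merging step
            log.write(fname + '\n')    # checkpoint after each file
            log.flush()
```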
Use a combination of target gene ID and TF gene ID as a key.  So if we have the following:
Target: AT5G09445 AT5G09445
TF: AT1G53910 RAP2.12
then the key will be "AT5G09445_AT1G53910".
Before it was "AT5G09445 AT5G09445 AT1G53910 RAP2.12".  This is OK in most cases, as long as a gene ID's corresponding gene name is consistent.  But if "AT1G53910" has a different gene name, then we will have a DIFFERENT key, which is not what we want.
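A small sketch of the keying scheme described above; the helper name and the "ID followed by name" field parsing are illustrative, but the resulting key matches the example in the commit message.

```python
def edge_key(target_field, tf_field):
    """Build the key from gene IDs only, so a changed gene name
    cannot produce a different key for the same edge.
    Each field is 'GENE_ID GENE_NAME', as in the example above."""
    target_id = target_field.split()[0]
    tf_id = tf_field.split()[0]
    return '%s_%s' % (target_id, tf_id)

# edge_key('AT5G09445 AT5G09445', 'AT1G53910 RAP2.12') == 'AT5G09445_AT1G53910'
```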
starts with 'edegs'

to gzip a file inside it.

edges.txt
Visit the following link to update two pickle files used by the Webapp, G.pickle and SOURCE_NODES.pickle:
http://118.25.96.118/brain/before
The visit can be done with the command line tool curl, as follows:
curl http://118.25.96.118/brain/before
-Hui

edges.sqlite
When I saved a static HTML page for each edge (e.g., http://118.25.96.118/static/edges/AT1G20910_AT1G30100_0.html), it took
5 GB of disk space to save 1 million HTML pages.  Not very disk space efficient.
An alternative is to save all edge information in a database table (i.e., edge), and query this database table for a
particular edge.
The database file edges.sqlite takes less than 200 MB for 1 million edges, less than a tenth of the space required by the
static approach.  The reason is that we do not store a lot of HTML tags in the database.  Quite happy about that, though it
seems that filling the database is a bit slower (2 hours??? for 1 million rows).
Also updated two files that were affected: update_network.py and update_network_by_force.py.  Now instead of copying 1 million
static HTML pages to the Webapp, I just need to copy edges.sqlite to static/edges/.  Faster.
In the Webapp, I updated start_webapp.py and added a file templates/edge.html for handling dynamic page generation.
-Hui
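The dynamic page generation could look roughly like the sketch below. Flask, the route, and the column name are assumptions; the commit only says that start_webapp.py queries edges.sqlite and renders templates/edge.html.

```python
import sqlite3
from flask import Flask, render_template

app = Flask(__name__)
EDGE_DB = 'static/edges/edges.sqlite'   # location mentioned in the commit message

@app.route('/edge/<edge_id>')
def edge_page(edge_id):
    """Build one edge page on demand from the database instead of
    serving a pre-generated static HTML file."""
    conn = sqlite3.connect(EDGE_DB)
    conn.row_factory = sqlite3.Row
    # the table name 'edge' comes from the commit; the column is hypothetical
    row = conn.execute('SELECT * FROM edge WHERE edge_id = ?', (edge_id,)).fetchone()
    conn.close()
    return render_template('edge.html', edge=row)
```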
If there is not enough space left on the disk, download_and_map.py will refuse to download any data.
This can be quite mysterious for a maintainer.
So, write the reason to the network log file.
The reason is something like:
"[download_and_map.py] home directory does not have enough space (only 13 G available)."
-Hui
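A sketch of the check and the log message, assuming shutil.disk_usage on the home directory; the threshold and the log file name are placeholders, not values taken from download_and_map.py.

```python
import os
import shutil

MIN_FREE_GB = 20                 # placeholder threshold
NETWORK_LOG = 'network.log'      # placeholder log file name

def enough_space_or_log():
    """Refuse to download when the home directory is low on space,
    and write the reason to the network log so it is not mysterious."""
    home = os.path.expanduser('~')
    free_gb = shutil.disk_usage(home).free // (1024 ** 3)
    if free_gb < MIN_FREE_GB:
        with open(NETWORK_LOG, 'a') as f:
            f.write('[download_and_map.py] home directory does not have '
                    'enough space (only %d G available).\n' % free_gb)
        return False
    return True
```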
The purpose of duniq is to avoid duplicated edge lines.
Now, just make sure we don't insert the same tuple.
-Hui
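If the tuples go into edges.sqlite, one way to guarantee this is a unique index plus INSERT OR IGNORE; the schema below is purely illustrative, since the commit does not show how the tuples are actually stored.

```python
import sqlite3

conn = sqlite3.connect('edges.sqlite')
# illustrative schema; the unique index makes duplicate tuples impossible
conn.execute('CREATE TABLE IF NOT EXISTS edge (target TEXT, tf TEXT, strength REAL)')
conn.execute('CREATE UNIQUE INDEX IF NOT EXISTS uniq_edge ON edge (target, tf, strength)')
# a duplicate of an existing row is silently skipped instead of inserted again
conn.execute('INSERT OR IGNORE INTO edge VALUES (?, ?, ?)',
             ('AT5G09445', 'AT1G53910', 0.75))
conn.commit()
conn.close()
```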
It would be interesting to see how edges' association strengths change over time, as time is an input variable for the
function that computes the association strength.

Define a function copy_and_backup_file(src_file, dest_dir) to do backup and compression work.
The function copy_and_backup_file is used in update_network_by_force.py.
-Hui
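A possible shape for the function, consistent with its name and the description above; the gzip backup naming is an assumption, since the commit does not show the body.

```python
import gzip
import os
import shutil

def copy_and_backup_file(src_file, dest_dir):
    """Copy src_file into dest_dir and also keep a gzip-compressed backup there."""
    os.makedirs(dest_dir, exist_ok=True)
    dest_file = os.path.join(dest_dir, os.path.basename(src_file))
    shutil.copy(src_file, dest_file)                  # plain copy
    with open(src_file, 'rb') as fin, gzip.open(dest_file + '.gz', 'wb') as fout:
        shutil.copyfileobj(fin, fout)                 # compressed backup
```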
Fixed a bug.
Now I close the figure (plt.close()) before creating a new one, so that the current figure is not drawn
on top of the old one.
-Hui
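A minimal example of the fix described above, assuming each plot is written to a file in its own call:

```python
import matplotlib
matplotlib.use('Agg')      # assume plots are saved to files on a server
import matplotlib.pyplot as plt

def save_plot(x, y, out_file):
    plt.close()            # close any previous figure before creating a new one (the fix)
    plt.figure()
    plt.plot(x, y)
    plt.savefig(out_file)
```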
might be responsible for thermomorphogenesis
Use networkx and matplotlib.
Reference:
Quint et al. (2016) Molecular and genetic control of plant thermomorphogenesis.  Nature Plants.
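A bare-bones version of such a plot with networkx and matplotlib; the two example edges are placeholders, not the actual gene network behind the commit.

```python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import networkx as nx

g = nx.DiGraph()
g.add_edge('TF_A', 'gene_B')     # placeholder regulatory edges
g.add_edge('TF_C', 'gene_B')

pos = nx.spring_layout(g, seed=1)
nx.draw_networkx(g, pos, node_color='lightblue', arrows=True)
plt.axis('off')
plt.savefig('thermo_network.png')
plt.close()
```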
the head comments.