Pi Cluster Parallel Script First Fire

Today was spent “slaving over a hot computer”. Part of that was getting GNU parallel to work with a couple of commands.

I first mentioned the GNU parallel command here:

https://chiefio.wordpress.com/2017/11/10/parallel-scripts-clusters-easy-use/

That page includes some notes on how to install it and a video on basic operation. Today I decided to make a couple of scripts that would automatically spread work around the cluster instead of my needing to type arcane command lines. I started by reviewing the tutorial and getting a basic example working on my desktop. That was fairly easy.

I chose a naming convention: any command I add to my $HOME/bin gets p_ as its first letters if it is a parallel command that just uses my local cores, and ps_ if it is a parallel-across-systems command spreading things over the whole cluster. In this way, I know a command is going to use parallel when I’m typing the name.

The example in the video used gzip and gunzip. Since I’ll often need to gzip or gunzip whole directories full of stuff, I thought I’d start with a p_gzip and a p_gunzip. They worked relatively well and somewhat easily.

Here’s the p_gzip text:

parallel -j0 --eta --progress gzip {}

It can be used by doing an ls to list the files you want zipped and send their names into standard-in like:

ls junk* | p_gzip 

The command says to do things in parallel; -j0 says use all available cores (in this case 4 on the Pi M3), so four gzips run at a time; --progress and --eta give status on progress and an estimated time to complete if it takes long; and gzip is run on each name that comes in on standard in, substituted for the {} placeholder.
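Spelled out as a file in $HOME/bin (the shebang and comments are my additions; chmod +x it after saving), a minimal p_gzip looks like:

```shell
#!/bin/sh
# p_gzip -- gzip each file name arriving on standard in, in parallel.
# -j0        : run as many jobs at once as there are CPU cores
# --eta      : print an estimated time to completion
# --progress : print per-computer job counts as work proceeds
# {}         : replaced by each input line (a file name)
parallel -j0 --eta --progress gzip {}
```

Used exactly as above: ls junk* | p_gzip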

When I first ran it, I thought I’d failed. I was staring at my monitor screen and saw no increase in activity. I expected to see the 4 cores load up. (That’s when I added some of the status options.) Then an “ls” showed all the files were gzipped. What? Yeah, it finished that fast. I’m working in a directory where I’d copied all my log files. Guess it wasn’t enough load.

chiefio@DevuanPiM2:~/pp$ ls | wc -l
64
chiefio@DevuanPiM2:~/pp$ du -ks .
572

64 files, compressed size 572 kB.

I then ran it with “time” to see just how long it was taking:

chiefio@DevuanPiM2:~/pp$ time ls * | p_gzip

Computers / CPU cores / Max jobs to run
1:local / 4 / 64

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s 0left 0.00avg  local:0/64/100%/0.0s 

real	0m1.671s
user	0m1.510s
sys	0m0.590s

Oh. 1 2/3 seconds. No wonder I didn’t see a blip on the load… Notice the top status report: 1 computer, the local one (i.e. my desktop), with 4 cores, running 64 jobs. As all of them together took only 1.7 seconds, each individual job was too little to show up on the htop monitor. I need bigger files to compress ;-)

Clearly from this example it is Real Easy to use parallel to load up all the cores on your local computer for the various tasks that are annoying to systems admin types but need doing.

Here’s the equivalent parallel gunzip:

ls *.gz | parallel -j0 --eta --progress gunzip 

For this one I chose to put the “ls” inside the script itself. Just gunzip anything that has a .gz suffix in a given directory. How long did it take to unzip all those 64 files?
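A slightly generalized version (the optional directory argument is my own extension, not part of the original one-liner) might be:

```shell
#!/bin/sh
# p_gunzip -- gunzip every .gz file in a directory, in parallel.
# With no argument it works on the current directory.
cd "${1:-.}" || exit 1
ls *.gz | parallel -j0 --eta --progress gunzip
```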

chiefio@DevuanPiM2:~/pp$ time p_gunzip

Computers / CPU cores / Max jobs to run
1:local / 4 / 64

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete
ETA: 0s 0left 0.00avg  local:0/64/100%/0.0s 

real	0m1.735s
user	0m1.210s
sys	0m0.770s

About the same. Notice the “ETA:” line. 0 seconds to complete, 0 left to do. By the time it got set up for the first ETA report, it was done. 64 jobs and 100% done on the local machine.

Clearly I needed something that would load up a core so I could see it actually doing something. I have such a “do nothing” load script, named load.

#!/bin/sh
# Stuff "foo" into ANS forever; pegs one core at 100%.
while true
do
    ANS="foo"
done

All it does is repeatedly stuff “foo” into the variable ANS forever in a loop. Locks up a CPU Core at 100% very nicely.

chiefio@DevuanPiM2:~/pp$ bcat p_load
ls ${1-d*} | parallel -j0 --eta load 

This, run in my parallel program testing directory, does an ls of all the files starting with a ‘d’ (about 20 of them, with names like daemon.log.1) as a cheap hack to let me vary the number of invocations of the “load” script. Works like a champ, and here’s the display it gives when run:

chiefio@DevuanPiM2:~/pp$ p_load

Computers / CPU cores / Max jobs to run
1:local / 4 / 20

Computer:jobs running/jobs completed/%of started jobs/Average seconds to complete

Note that it never completes and so can’t really estimate when it will complete. A dirty trick to play on --eta ;-)

1 computer, the local one, with 4 cores in use, and 20 instances of “load” pegging it at 100% loaded.

Great! I thought, now let’s just bring in the rest of the cluster.

It Is Much More Complicated To Cluster Parallel

While not all that hard for an old hand at Linux / Unix, it is, IMHO, more complicated than it needs to be for simple things (while letting you do all sorts of complicated things no doubt…)

When I ran my first test case I got something like 64 prompts for my login / password. Ooops.

First off, the underlying transport to the distributed cluster nodes is SSH, so you must have it installed AND have a shared key set up to let you log in without password prompting. Oh Joy. I got to go through the “generate a key pair and copy it to each machine in the cluster” exercise. Gee, that would be a great thing for a parallel automated task… oh, right…

Here’s a helpful page on how to do that:

https://www.ostechnix.com/configure-passwordless-ssh-login-linux/

It has you install openssh. Debian / Devuan already has ssh for me so I just skipped that step and dropped down to:

Generate SSH keypair on local system

ssh-keygen creates a keypair, private and public keys. The private key should be kept secret. You shouldn’t disclose it to anyone else. And, the public key should be shared with the remote systems that you want to access via ssh.

Important note: Do not generate keypair as root, as only root would be able to use those keys. Create key pairs as normal user.

Run the following command in local system (Ubuntu in my case) to generate the keypair.

ssh-keygen

So on my “headend” machine, logged in as “me” (chiefio) I did that ssh-keygen command. Easy.

THEN, I got to do this next step 5 times for the 5 other computers presently in the cluster… Easy but tedious.

Now, we have created the key pair in the client system. Now, copy the SSH public key to the remote system that you want to access via SSH.

In my case, I want to access my remote system which is running with CentOS. So, I ran the following command to copy my public key to the remote system:

ssh-copy-id ostechnix@192.168.43.150

As my login was “chiefio” and my machines are in a 10.x.x.x network, I got to do something more like:

ssh-copy-id chiefio@10.168.168.141
ssh-copy-id chiefio@10.168.168.142
ssh-copy-id chiefio@10.168.168.143
ssh-copy-id chiefio@10.168.168.144
ssh-copy-id chiefio@10.168.168.145

All from the headend machine.

I could likely have put that in a script-lette but it was a one-off so I just did it long hand. I could likely have also used the hostnames, but just followed the model shamelessly out of sloth.
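For the record, the script-lette would be something like this (the h1..h5 hostname aliases are an assumption here, matching the aliases used later on):

```shell
#!/bin/sh
# Push the headend's public key to every headless node.
# Each run still prompts for that node's password, one last time.
for node in h1 h2 h3 h4 h5
do
    ssh-copy-id "chiefio@$node"
done
```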

That, then, let me do a password free login from headend to all the headless nodes over SSH. Oh Joy. This time for sure!

Well, I launched the parallel system version of my load command, and it showed work blips on the various machines as they prepared to be infinitely loaded forever, and then it was done. WT?… There were also a fair number of error messages. My first try was to just put a copy of “load” in my home directory bin on each machine, figuring that if it logs in as me, it will be in the search path. Nope. It kept telling me “cannot find command load” and just ending.

Well, cutting to the end, it’s got ‘fiddly bits’ that need a good fiddle. Probably to protect you from stray programs of the same name but different content, or perhaps to make it ‘easier’ so you never have to install things on the headless nodes, the parallel command has its own way of distributing the program / script to be run. I also got some nags about SSH wanting to have the max connections made larger (that I’ve not done yet) and some about Python not being happy at “locale” being unset (so it uses the default that I want anyway) and some more housekeeping clean up to do. But it DOES run and it DOES work.

The “feel the other systems and set up to distribute work” seems to take a few seconds, so anything needing less than 30 seconds to run on your local box probably will not benefit from a cluster command. Use this only for things that take long enough locally to bother you enough to write and debug a command / script.

Here’s the final result:

ls ${1-d*} | parallel -j0 --eta -S:,h1,h2,h3,h4,h5 --basefile /home/chiefio/bin/load '/home/chiefio/bin/load' 

Again, I’m feeding it a set of bogus input just to stimulate a number of jobs being farmed out. This goes into the pipe into parallel, with -j0 for jobs = cores on any given computer, and --eta to give us a useless eta on something that will never end. Then the interesting bit: the -S lists the systems to run this command upon. The “:” is special and means “localhost” (in this case the ‘headend’ machine), then comes a comma separated list of other computers. For me, headless1 has an alias of h1, headless2 an alias of h2, etc. So I’ve listed my 5 headless nodes.

Now, the “trick” is that you must tell it where your “script” or command is located. That’s the --basefile option, pointing at my bin directory and the command named “load”. The --basefile option can also share data files around too. Next, in single quotes, you put the actual command and arguments to execute. As load takes no arguments, it’s the same as the --basefile listing, but in single quotes. I’ve not yet gotten into the whole argument passing thing. That will be for tomorrow, and making a parallel systems version of gzip and gunzip.
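As a preview of that, a first guess at a ps_gzip (a sketch, not yet tested on the cluster). GNU parallel's --trc {}.gz is shorthand for --transfer --return {}.gz --cleanup: ship each input file to the node, bring the .gz back, and remove the temporary copies. It assumes gzip is already installed on every node:

```shell
#!/bin/sh
# ps_gzip (sketch) -- gzip files named on standard in, across the cluster.
# -S :,h1..h5 : run on the headend (:) plus the five headless nodes
# --trc {}.gz : transfer each input file out, return the .gz, clean up
parallel -j0 --eta -S :,h1,h2,h3,h4,h5 --trc {}.gz gzip {}
```

Usage would be the same shape as the local version: ls junk* | ps_gzip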

For now, this proves I’ve got the cluster running parallel scripts and paves the way for me to do things like a set of parallel FORTRAN compiles. While there is a dedicated command for parallel C compiles (“distcc”), I’ve not found an equal for FORTRAN. I have distcc on each node of the cluster, so parallel C compiles are ready to go (but having different libraries might be an issue – identical systems, releases, and libraries are best). Hopefully FORTRAN will be a bit forgiving on that front. We’ll see (eventually).

To the extent that parallel FORTRAN compiles require the same library and OS images, I can just use the 4 boards (16 cores) that are all matched Devuan R.Pi systems, leaving the Odroid C1 and Orange Pi One out of it. At least until a properly matched Devuan is available for them. Just changing the -S system list is all it takes.

Oh, here’s the result of running my ps_load command and loading up all 6 systems. A couple only had 3 cores in use, but that’s because I had more cores than files starting with the letter d ;-)

chiefio@DevuanPiM2:~/pp$ ps_load
parallel: Warning: ssh to h2 only allows for 15 simultaneous logins.
You may raise this by changing /etc/ssh/sshd_config:MaxStartup on h2.
Using only 14 connections to avoid race conditions.
parallel: Warning: ssh to h4 only allows for 17 simultaneous logins.
You may raise this by changing /etc/ssh/sshd_config:MaxStartup on h4.
Using only 16 connections to avoid race conditions.
parallel: Warning: ssh to h3 only allows for 14 simultaneous logins.
You may raise this by changing /etc/ssh/sshd_config:MaxStartup on h3.
Using only 13 connections to avoid race conditions.
parallel: Warning: ssh to h1 only allows for 17 simultaneous logins.
You may raise this by changing /etc/ssh/sshd_config:MaxStartup on h1.
Using only 16 connections to avoid race conditions.
parallel: Warning: ssh to h5 only allows for 16 simultaneous logins.
You may raise this by changing /etc/ssh/sshd_config:MaxStartup on h5.
Using only 15 connections to avoid race conditions.

So I need to go to each of those headless systems and raise the SSH logins limit, or set parallel to ask for fewer. It still runs, just nags you.
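The fix on each node is one line in the sshd config (the option's real name is MaxStartups; the warning text drops the ‘s’). Something like:

```
# /etc/ssh/sshd_config on each headless node:
MaxStartups 100
# then restart sshd, e.g.:  sudo service ssh restart
```

Alternatively, parallel can be told to assume fewer slots per node right in the -S list, e.g. -S 4/h1 treats h1 as a 4-core box.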

Warning: Permanently added 'h4,10.168.168.144' (ECDSA) to the list of known hosts.
Warning: Permanently added 'h1,10.168.168.141' (ED25519) to the list of known hosts.
Warning: Permanently added 'h5,10.168.168.145' (ECDSA) to the list of known hosts.
Warning: Permanently added 'h2,10.168.168.142' (ED25519) to the list of known hosts.
Warning: Permanently added 'h3,10.168.168.143' (ECDSA) to the list of known hosts.
Warning: Permanently added 'h1,10.168.168.141' (ED25519) to the list of known hosts.

Since I’m frequently changing host boards, I set the file where this “permanent” list is kept to be a link to /dev/null, so it nags me about this each time, but I don’t have to remove old identities to let new ones be added. IF the cluster stabilizes, I’ll put it back to normal and these will go away.
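For anyone wanting the same trick, it is just a symlink (note this replaces your real known_hosts, so think first):

```shell
# Make the "permanent" host list a black hole, so swapped boards
# never leave a stale identity behind (at the cost of the nag above):
mkdir -p "$HOME/.ssh"
rm -f "$HOME/.ssh/known_hosts"
ln -s /dev/null "$HOME/.ssh/known_hosts"
```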

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Warning: Permanently added 'h2,10.16.16.42' (ED25519) to the list of known hosts.
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

So Perl on a couple of the boards needs to have ‘locale’ fixed. Maybe someday… I did it on a couple of others. Really a stupid Perl behaviour.
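For reference, the usual Debian / Devuan incantation to quiet that nag, run as root on each offending node, is roughly:

```shell
# Enable the en_US.UTF-8 locale and make it the node's default:
sudo sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
sudo locale-gen
sudo update-locale LANG=en_US.UTF-8
```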

Warning: Permanently added 'h3,10.168.168.143' (ECDSA) to the list of known hosts.
Warning: Permanently added 'h4,10.168.168.144' (ECDSA) to the list of known hosts.
Warning: Permanently added 'h5,10.168.168.145' (ECDSA) to the list of known hosts.

Computers / CPU cores / Max jobs to run
1:local / 4 / 20
2:h1 / 4 / 16
3:h2 / 4 / 14
4:h3 / 4 / 13
5:h4 / 4 / 16
6:h5 / 5 / 15

Computer:jobs running/jobs completed/%of started jobs

And there you can see the 6 computers, each with 4 cores, and what max number of jobs each can get. I don’t know why h5 claims to have 5 cores. It’s an Orange Pi One, with Armbian, so maybe it’s quirky… Ran fine in any case, just might get assigned more work units than it ought to get.

In Conclusion

The way distributing scripts / commands is handled is a bit obtuse. Passing parameters and file names is a PITA (partly as there are 3 or 4 ways you can do it, to try pleasing everyone, and that just makes it 3 or 4 times as much to learn and 9 to 16 times as many disconnects between the way you read it and what the example shows… Oh Well.)

The good bit is I now have model parallel scripted commands for both machine-local and cluster-distributed execution, and they work. Now I can, one thing at a time, model off them to make a local parallel or cluster parallel script to distribute the work as appropriate. I’ll need to learn some more options as I do that, but those will be “as needed / where needed”.

Over time, as I run into things that are slow, I can spend time while they run making a parallel scripted version of them. For now, gzip, gunzip, xz, unxz, perhaps some data move / preen tasks, and making FORTRAN modules in a distributed way. Certainly testing them on different data sets. I also need to integrate a Real Disk NFS data store for cluster shared data. That can cut down on some of the headend copying data to the headless nodes, but at the cost of an NFS server bottleneck. I also need to find the power supply for my 100 Mb network switch, since running the cluster through the old crusty 10 Mb hub it is on now will certainly make “network” the hang-up point.

But now that I’ve got things for it to actually DO, I’ll be motivated to fix those things as they give me a bit of a pinch ;-)

The other thing I’ll end up doing is building more cluster control / management tools. Doing a log-in to 6 systems every time you want to shut it down will get old quick ;-) Not bad when playing with 2 to 4 boards at a time, but a PITA as a normal operations process.
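A first cut at one such tool, a whole-cluster shutdown (again a sketch; it assumes passwordless sudo for shutdown has been set up in sudoers on each node):

```shell
#!/bin/sh
# ps_halt (sketch) -- power down all the headless nodes from the headend.
for node in h1 h2 h3 h4 h5
do
    ssh "$node" sudo shutdown -h now &
done
wait
echo "Headless nodes told to halt; shut down the headend by hand."
```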

With that, I’m signing off for the evening. I’ve got a cluster that works. I can distribute just about any Linux command or script to 24 cores as it stands and another 12 if I bother to configure them too. Now it’s just putting it to work. One script and one task at a time…


Stock Prices Pause At SMA

I’ll add some charts later.

Right now, just a note that the price recovery in the stock markets has “paused” at the Simple Moving Average stack.

This is a “battle ground” behaviour as the longs and shorts fight for dominance.

At present, it looks like no upside “punch through” happening, and that typically means a “fall away” to the downside.

We’re at least 3 days into it, which is a bit odd (though not unheard of). Usually a ‘punch through’ happens fast at the pause at the SMA stack (sometimes in only one day); if it is going to fall instead, it is usually just a couple of daily ticks before it falls away. Then again, we had a holiday, and those change how folks behave (major money players will sometimes be taking extra days off and delaying their participation).

So at this point, it is looking more like a failed recovery at the SMA stack, but the indicators are a bit mixed still. Any major drop day will be confirmation of the failed recovery. Only a hot streak up through the SMA stack will mean a return to the bull run market.

IMHO, it’s about 60% odds of a return to the downside, and 40% odds of an upside breakout. Every day spent stuck here or dropping increases the odds of a renewed drop. Only a clear break above the SMA stack will mean a return to the rising market and further bull market action.


Sinabung Bang! Bye Bye…

https://www.iceagenow.info/powerful-eruption-completely-annihilated-mount-singabungs-peak/

Propels ash more than 4 miles (7 km) into the sky and blows away much of the mountain’s summit. Includes video.

Indonesia’s Center for Volcanology and Geological Hazard Mitigation (PVMBG) says the explosion “completely annihilated” the mountain’s peak, its ‘lava dome.’

Images released by PVMBG show what the top of the volcano, with more than a million cubic meters shaved off, looks like. Text on top of frame reads “Before Feb. 19, 2018” and text on bottom reads “After Feb. 19, 2018.”

Pictures, videos, and links in the original so “hit the link”…

Thanks to Jerry Duff and Keith Connelly for these links

“This is a major eruption,” says Jerry. “Much of Asia and Malaysia will be affected. Crop loss and possibly no summer. The ash is above both the stratosphere and troposphere. I cannot stress how serious this eruption is.”

Now to me it just looks like one mountain peak. Not seeing how this is going to wipe out Asian crops or end Summer. Then again, I’ve no idea how big Sinabung is, nor how that stacks up to others. Given that, let’s look at an eruption I remember.

https://en.wikipedia.org/wiki/1980_eruption_of_Mount_St._Helens

During the nine hours of vigorous eruptive activity, about 540,000,000 tons of ash fell over an area of more than 22,000 square miles (57,000 km2). The total volume of the ash before its compaction by rainfall was about 0.3 cubic miles (1.3 km3). The volume of the uncompacted ash is equivalent to about 0.05 cubic miles (210,000,000 m3) of solid rock, or about 7% of the amount of material that slid off in the debris avalanche. By around 5:30 p.m. on May 18, the vertical ash column declined in stature, but less severe outbursts continued through the next several days.

So 540 mega-tons of ash, about 0.3 cubic miles uncompacted, spread over 57,000 km^2.
Or about 5/100 cubic miles of solid rock.
That’s 210 mega-m^3.

Compare Sinabung, using the numbers in the quote: 1,000,000 m^3 of rock gone.
That’s 1 mega-m^3.

Somehow I’m not seeing how that is earth shaking. Maybe the numbers are wrong?

https://en.wikipedia.org/wiki/Mount_Sinabung

February 2018

An epic gigantic eruption took place on 19 February 2018 with large volumes of ash and debris shot upwards to thousands of feet into the sky. The country’s National Disaster Mitigation Agency has confirmed that there were currently no fatalities or injuries. The eruption expelled at least 1.6 million cubic meters of material from the mountain’s summit.

So they say 1.6 mega-m^3 of rock. OK, it’s a big eruption, but if I’ve got the numbers right, much less than Mt. St. Helens. So am I missing something here?
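Putting the two solid-rock figures side by side (taking the quoted numbers at face value), a quick check with awk:

```shell
# Mt. St. Helens, solid-rock equivalent: 210,000,000 m^3
# Sinabung (per Wikipedia):                1,600,000 m^3
awk 'BEGIN { printf "St. Helens / Sinabung = %.0f times the rock\n", 210000000 / 1600000 }'
# prints: St. Helens / Sinabung = 131 times the rock
```

So St. Helens moved roughly 130 times the rock, before even counting its 1.3 km^3 of uncompacted ash.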


Barberry Berberine Diabetes & Fat Mice

I followed this chain from the Fat Mice study to end up on Barberry but I’m going to present it in reverse order.

The basic “story line” is that a connection has been found between gut bacteria, a particular “receptor” in the gut, and stimulation of particular gene expression, such that “metabolic syndrome” can be modulated via gut bacteria modulation. That, then, sent me looking at the things causing gene expression and that ended up at folk treatments for things like Diabetes using Barberry. Essentially it shows how barberry works by modulating those same gene expressing pathways and justifies the herbal medicine claims (to some extent).

I’d assert that this also explains why the “Fasting Cure” for Diabetes could also work. Fasting not only stresses the person fasting, it starves the bacterial and fungal populations of the gut and will cause significant shifts in them.

So, Barberry. It has a substance in it called Berberine. Also found in several other plants (including the California Poppy though they are a protected plant so don’t go eating them in public ;-)

https://en.wikipedia.org/wiki/Berberis

Berberis (/ˈbɜːrbərɪs/), commonly known as barberry, is a large genus of deciduous and evergreen shrubs from 1–5 m (3.3–16.4 ft) tall, found throughout temperate and subtropical regions of the world (apart from Australia). Species diversity is greatest in South America and Asia; Europe, Africa and North America have native species as well. The best-known Berberis species is the European barberry, Berberis vulgaris, which is common in Europe, North Africa, the Middle East, and central Asia, and has been widely introduced in North America. Many of the species have spines on the shoots and along the margins of the leaves.

So a very common plant found all over the place. Could it be that selling something growing like a weed is hard to do but selling all sorts of exotic drugs and diet plans makes a lot of money? I’m sure it was just an oversight that this plant might work… /sarc;

https://en.wikipedia.org/wiki/Berberine

Berberine is a quaternary ammonium salt from the protoberberine group of benzylisoquinoline alkaloids found in such plants as Berberis (e.g. Berberis vulgaris – barberry, Berberis aristata – tree turmeric, Mahonia aquifolium – Oregon-grape, Hydrastis canadensis – goldenseal, Xanthorhiza simplicissima – yellowroot, Phellodendron amurense – Amur cork tree, Coptis chinensis – Chinese goldthread, Tinospora cordifolia, Argemone mexicana – prickly poppy, and Eschscholzia californica – Californian poppy). Berberine is usually found in the roots, rhizomes, stems, and bark.

Due to berberine’s strong yellow color, Berberis species were used to dye wool, leather, and wood. Wool is still dyed with berberine today in northern India. Under ultraviolet light, berberine shows a strong yellow fluorescence, so it is used in histology for staining heparin in mast cells. As a natural dye, berberine has a color index of 75160.
[…]
Culinary uses

Berberis vulgaris grows in the wild in much of Europe and West Asia. It produces large crops of edible berries, rich in vitamin C, but with a sharp acid flavour. In Europe for many centuries the berries were used for culinary purposes in ways comparable to how citrus peel might be used. Today in Europe they are very infrequently used. The country in which they are used the most, is Iran where they are referred to as “Zereshk” (زرشک) in Persian. The berries are common in Iranian (Persian) cuisine such as in rice pilafs (known as “Zereshk Polo”) and as a flavouring for poultry meat. Due to their inherent sour flavor, they are sometimes cooked with sugar before being added to Persian rice. Iranian markets sell Zereshk dried. In Russia they are sometimes used in jams (especially the mixed berry ones) and extract from them is a common flavouring for soft drinks and candies/sweets.

Berberis microphylla and B. darwinii (both known as calafate and michay) are two species found in Patagonia in Argentina and Chile. Their edible purple fruits are used for jams and infusions. The calafate and michay are symbols of Patagonia.

Traditional medicine

The dried fruit of Berberis vulgaris is used in herbal medicine. The chemical constituents include isoquinolone alkaloids, especially berberine. One study reports that it is superior to metformin in treating polycystic ovary syndrome.

Folk medicine

Berberine was supposedly used in China as a folk medicine by Shennong around 3000 BC. This first recorded use of berberine is described in the ancient Chinese medical book The Divine Farmer’s Herb-Root Classic.

So not only might one find it in “Health Food Stores”, but in Iranian Markets and perhaps a Hispanic food store with a Chilean or Argentinian focus.

Then “Traditional Medicine” used it and it might also be found at a Chinese Herbalist store.

So not exactly hard to find.

But no worries. Researchers have discovered it now… only a few thousand years later…

Research

Berberine is under investigation to determine whether it may have applications for treating arrhythmia, diabetes, hyperlipidemia, and cancer. Berberine exerts class III antiarrhythmic action. There is some evidence that berberine may have anti-aging (gero-suppressive) properties. Berberine is already being used as an ‘Insulin Sensitizer’ which is able to provide better glycaemic control in most of the users [Only upon prescription of a qualified physician]. In live cells, berberine localizes in mitochondria. Its mitochondrial localization is consistent with inhibition of complex I of respiratory chain, decrease of ATP production, and subsequent activation of AMPK, which leads to suppression of mTOR signaling. The bioavailability of berberine is low.

Some research has been undertaken into possible use against methicillin-resistant Staphylococcus aureus (MRSA) infection. Berberine is considered antibiotic. When applied in vitro and in combination with methoxyhydnocarpin, an inhibitor of multidrug resistance pumps, berberine inhibits growth of Staphylococcus aureus and Microcystis aeruginosa, a toxic cyanobacterium.

Well. Guess I need to go on a shopping trip to strange stores looking for Barberry…

I got to Barberry by looking up one of the “receptors” that was involved in the Fat Mice Study.

https://en.wikipedia.org/wiki/Peroxisome_proliferator-activated_receptor

In the field of molecular biology, the peroxisome proliferator-activated receptors (PPARs) are a group of nuclear receptor proteins that function as transcription factors regulating the expression of genes. PPARs play essential roles in the regulation of cellular differentiation, development, and metabolism (carbohydrate, lipid, protein), and tumorigenesis of higher organisms.

Significantly involved in fat metabolism (of a couple of kinds) along with other functions, those receptors are important in what you do with your food and fat intake. Regulating them ought to have effects on things like diabetes and obesity, so no surprises here.

Hey, looks like the Drug Companies have even discovered it! I’ve bolded the bit that sent me off to Barberry land.

Pharmacology and PPAR modulators

Main article: PPAR modulator

PPARα and PPARγ are the molecular targets of a number of marketed drugs. For instance the hypolipidemic fibrates activate PPARα, and the anti diabetic thiazolidinediones activate PPARγ. The synthetic chemical perfluorooctanoic acid activates PPARα while the synthetic perfluorononanoic acid activates both PPARα and PPARγ. Berberine activates PPARγ, as well as other natural compounds from different chemical classes.

But why eat a bowl of berries when you could be put on a drug regime of thiazolidinediones and perfluorononanoic acid…

But about those Fat Mice…

https://www.sciencedaily.com/releases/2018/02/180212100618.htm

Goes into great detail about how they made some genetically altered mice to see what the particular “receptors” did in terms of making folks (er, mice…) fat. Then discovers that the particular bacteria in your gut have figured out how to make you eat more and get fatter…

Mouse study adds to evidence linking gut bacteria and obesity

Date:
February 12, 2018
Source:
Johns Hopkins Medicine
Summary:
A new study of mice with the rodent equivalent of metabolic syndrome has added to evidence that the intestinal microbiome — a ‘garden’ of bacterial, viral and fungal genes — plays a substantial role in the development of obesity and insulin resistance in mammals, including humans.

A new Johns Hopkins study of mice with the rodent equivalent of metabolic syndrome has added to evidence that the intestinal microbiome — a “garden” of bacterial, viral and fungal genes — plays a substantial role in the development of obesity and insulin resistance in mammals, including humans.

A report of the findings, published Jan. 24 in Mucosal Immunology, highlights the potential to prevent obesity and diabetes by manipulating levels and ratios of gut bacteria, and/or modifying the chemical and biological pathways for metabolism-activating genes.

“This study adds to our understanding of how bacteria may cause obesity, and we found particular types of bacteria in mice that were strongly linked to metabolic syndrome,” says David Hackam, M.D., Ph.D., surgeon-in-chief and co-director of Johns Hopkins Children’s Center and the study’s senior author. “With this new knowledge we can look for ways to control the responsible bacteria or related genes and hopefully prevent obesity in children and adults.”

Metabolic syndrome, a cluster of conditions including obesity around the waist, high blood sugar and increased blood pressure, is a risk factor for heart disease, stroke and diabetes. While no precise cause for metabolic syndrome is known, previous studies of Toll-like receptor 4 (TLR4), a protein that receives chemical signals to activate inflammation, have suggested that TLR4 may be responsible in part for its development.

Hmmm… connecting inflammation into the mix too. So they chopped it out of some mice guts, and they got obese on regular mouse chow rations. Then they played around with more of that (just chopping it out of the gut genome) and did more tests. Eventually deciding the gut bugs are involved too.

[…]
To investigate the role the bacterial makeup of the gut had on the mice, Hackam and his team then administered antibiotics to the normal and TLR4 intestinal epithelium-deficient mice. Antibiotics significantly reduced the amount of bacteria in the intestinal tract and prevented all symptoms of metabolic syndrome in the mice that lacked TLR4 in their intestinal epitheliums.

This demonstrates, the researchers say, that bacterial levels can be manipulated to prevent the development of metabolic syndrome.

To further explore the role of intestinal epithelial TLR4 on the development of metabolic syndrome, the research team analyzed fecal samples from the TLR4 intestinal epithelium-deficient and normal mice. The team found that specific clusters of bacteria that contribute to the development of metabolic syndrome were expressed differently in the deficient mice than in normal mice. They also determined that the bacteria expressed genes that made them “less hungry” and thus less able to digest the nutrients present in the mouse chow. This resulted in a greater abundance of food for the mouse to absorb, which contributed to obesity.

I question just how much ‘less hungry bacteria’ was the actual mechanism as opposed to direct modification of gene expression in the fat metabolism path of the host… but moving on…

The researchers then analyzed the genes expressed in the lining of the intestinal mucosa — the site at which food absorption occurs — in normal and TLR4 intestinal epithelium-deficient mice. Of note, the team determined that important genes in the perixisome proliferator-activated receptor (PPAR) metabolic pathway were significantly suppressed in the deficient mice. Administering antibiotics prevented the differences in gene regulation between the two groups of mice, as did administering drugs to activate the PPAR signaling pathway, further explaining the reasons for which obesity developed.

“All of our experiments imply that the bacterial sensor TLR4 regulates both host and bacterial genes that play previously unrecognized roles in energy metabolism leading to the development of metabolic syndrome in mice,” says Hackam.

Gee… a direct mode of action for BOTH Barberry and simple fasting to be effective treatments for diabetes, obesity, metabolic syndrome, and who knows what all else.

But I’m sure the Drug Industry will find a nice pill you can take to monetize this instead.
