<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: When Less is More</title>
	<atom:link href="http://amethystinformatics.co.uk/blog/2015/06/30/when-less-is-more/feed/" rel="self" type="application/rss+xml" />
	<link>http://amethystinformatics.co.uk/blog/2015/06/30/when-less-is-more/</link>
	<description>Crystallising your data</description>
	<lastBuildDate>Wed, 02 Dec 2015 07:18:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.38</generator>
	<item>
		<title>By: molmod</title>
		<link>http://amethystinformatics.co.uk/blog/2015/06/30/when-less-is-more/#comment-127</link>
		<dc:creator><![CDATA[molmod]]></dc:creator>
		<pubDate>Wed, 15 Jul 2015 13:28:31 +0000</pubDate>
		<guid isPermaLink="false">http://amethystinformatics.co.uk/blog/?p=193#comment-127</guid>
		<description><![CDATA[That&#039;s definitely the case: an interpretable model can be much more useful.  Linear regression models certainly win in that sense over NNs.  However, sometimes the descriptors that go into such a model have an obscure relationship with molecular structure and give no real insight into how to optimise the activity of interest.  I don&#039;t think I&#039;d dare ask a chemist to reduce his Balaban index, for example.

I love the images generated by the Google NNs in your link.  Particularly the one that sees animals in the pictures of clouds.]]></description>
		<content:encoded><![CDATA[<p>That&#8217;s definitely the case: an interpretable model can be much more useful.  Linear regression models certainly win in that sense over NNs.  However, sometimes the descriptors that go into such a model have an obscure relationship with molecular structure and give no real insight into how to optimise the activity of interest.  I don&#8217;t think I&#8217;d dare ask a chemist to reduce his Balaban index, for example.</p>
<p>I love the images generated by the Google NNs in your link.  Particularly the one that sees animals in the pictures of clouds.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Amethyst</title>
		<link>http://amethystinformatics.co.uk/blog/2015/06/30/when-less-is-more/#comment-118</link>
		<dc:creator><![CDATA[Amethyst]]></dc:creator>
		<pubDate>Wed, 08 Jul 2015 16:56:40 +0000</pubDate>
		<guid isPermaLink="false">http://amethystinformatics.co.uk/blog/?p=193#comment-118</guid>
		<description><![CDATA[Hey Molmod, I like your &quot;less is more&quot; neural net thoughts.  Do you think that the interpretability of a model might also be a factor in the choice of method used?  I have found that some chemistry clients favour models that give them a meaningful reason why a compound is being predicted as active.

I have just stumbled across this &lt;a target=&quot;_blank&quot; href=&quot;http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html&quot; rel=&quot;nofollow&quot;&gt;interesting blog article&lt;/a&gt; published last month about neural networks in image recognition.  It has some really eye-catching images of what you get when you take a peek inside a neural network.  For example, take a look at the magical animals that are found in the sky.]]></description>
		<content:encoded><![CDATA[<p>Hey Molmod, I like your &#8220;less is more&#8221; neural net thoughts.  Do you think that the interpretability of a model might also be a factor in the choice of method used?  I have found that some chemistry clients favour models that give them a meaningful reason why a compound is being predicted as active.</p>
<p>I have just stumbled across this <a target="_blank" href="http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html" rel="nofollow">interesting blog article</a> published last month about neural networks in image recognition.  It has some really eye-catching images of what you get when you take a peek inside a neural network.  For example, take a look at the magical animals that are found in the sky.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: molmod</title>
		<link>http://amethystinformatics.co.uk/blog/2015/06/30/when-less-is-more/#comment-114</link>
		<dc:creator><![CDATA[molmod]]></dc:creator>
		<pubDate>Sun, 05 Jul 2015 13:35:37 +0000</pubDate>
		<guid isPermaLink="false">http://amethystinformatics.co.uk/blog/?p=193#comment-114</guid>
		<description><![CDATA[Hi there Amethyst.  I remember this well from when I was playing around with neural nets a lot.  Add too many neurons and it was great at telling you what you already knew, but the predictions were hopeless.  The other interesting thing was that you could train them too much.  If you curtailed the training, the predictions were usually better than if I let the training run on longer, whereas the error in the training set just got smaller and smaller.  Definitely less is more there...
Neural nets seem to have fallen out of fashion these days (at least in chemistry applications) although there have been one or two more recent papers that may have got around some of these issues.  I wait to see if they rise again in popularity to the levels we saw in the &#039;90s.]]></description>
		<content:encoded><![CDATA[<p>Hi there Amethyst.  I remember this well from when I was playing around with neural nets a lot.  Add too many neurons and it was great at telling you what you already knew, but the predictions were hopeless.  The other interesting thing was that you could train them too much.  If you curtailed the training, the predictions were usually better than if I let the training run on longer, whereas the error in the training set just got smaller and smaller.  Definitely less is more there&#8230;<br />
Neural nets seem to have fallen out of fashion these days (at least in chemistry applications) although there have been one or two more recent papers that may have got around some of these issues.  I wait to see if they rise again in popularity to the levels we saw in the &#8217;90s.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
