Tuesday 17 June 2014

Contra Networks


Olaf Sporns, "Contributions and challenges for network models in cognitive neuroscience"

Nature Neuroscience 17:652-60 (2014)

This paper might be one of the most stimulating I have read recently - mainly because I agree with very little of it! In summary, he reviews studies that have analysed the connections between brain areas (histological, diffusion tensor imaging, and functional connectivity) using network models. Network theory is, in my eyes, just a posh way of getting numbers that characterise a particular way of connecting nodes together. Sporns offers to take a critical look at some of the advantages and limitations of network models in cognitive neuroscience. Unsurprisingly, he has a very positive take.

Comments

  1. You are eating your tail. Typical hollow and meaningless conclusions are:

    • Network models offer a theoretical framework that encompasses both local and global processes and thus resolves the long-standing conflict between localised and distributed processing.
    • By addressing how connectivity mediates both segregation and integration, network approaches not only reconcile these seemingly opposing perspectives, they also suggest that their coexistence is fundamental for brain function.
    • Integration in large-scale brain networks is supported by a set of specialised brain regions that transiently orchestrate interactions between functional modules
    • Integrative processes have a distinct anatomical substrate
    • There are robust relationships between network architecture and functional specialization

    Basically, in each case, the conclusion is already built into the premises. If you start by delineating modules and calculating connectivity and proximity measures, these conclusions say nothing more than "I am using this method". They explain nothing.

    Just as an example, let's take the first one - that networks might allow us to resolve the global-local processing problem. The argument appears to run like this:
    • Premise 1: "global network measures capture important characteristics related to overall signaling capacity"
      (note that this is itself a rather fanciful comment for functional networks -- even if I accept the doctrine of networks! In real computer networks, signalling capacity is the rate of transmission - mainly a function of compression at encoding, and how fast a channel can change its signal) 
    • Premise 2: "most RSNs are associated with specific behavioural or cognitive domains... network communities corresponding to specific domains were found to be associated with distinct functional fingerprints" - OK
    • Conclusion: we have found numbers characterising networks at local and global scales, so we have solved the dilemma of whether computation is done at local or global scales.
     Hopefully you can see that all the arguments above are of this form:
     "I can measure new numbers meaning X - so this explains how the brain works by using X"

     In the same vein, functional networks that show changes in such global/local parameters
     say nothing more than: "global/local correlations change over time, so this solves dilemmas we had about how global/local processing coexist".

    (Incidentally, if you believe in these functional changes over time, they invalidate any conclusions you have made about structural networks!)
  2. Is it surprising that functional connectivity parallels structural connectivity? Could it be any other way?
  3. Quantitative is overrated. Yes, network models allow the application of quantitative measures of network topology. But to what end? We don't yet have even a qualitative theory of how cognition is done. Perhaps we should start working on that?
  4. Claims that
    • it allows identification of structural network elements that are specialised for carrying out integrative function; and
    • quantitative approaches provide information on regional roles in neural processing within and across communities.
    I.e. betweenness centrality or degree are taken as markers of a node having integrative function. Why? Couldn't we just be looking at a relay station? The thalamus might be a relay station (I don't think it is, of course) yet still be at the centre of the network topology - with no integrative function whatsoever.
  5. Areas which are highly functionally connected are unlikely to be computing anything interesting. In fact, if the activity of two areas is highly correlated, it suggests that they represent information in similar ways, and thus no real computation has been performed. If the frontoparietal module correlates with visual and tactile networks, that means that it encodes the two types of information (visual and tactile) in parallel. I.e. frontoparietal activity is a linear combination of visual and tactile information, and thus no computation has been performed, only superposition.

    In fact, shouldn't computation be defined as occurring just when areas are structurally connected but no functional correlation is seen?
  6. What are the fundamental rules and principles of network science? They are either tautologous (trivially true) or not based on the kind of information processing the brain does. No conclusion is going to be relevant to the neural basis of cognition if you neglect:
    1. the direction of connections,
    2. inhibitory vs excitatory,
    3. preservation / distortion / manipulation of representations,
    4. cortical lamination...
    Even if you add these "instantiation properties" to network theory, what does the network theory itself add?
  7. You are going to tell me that, in general, areas that are near each other in the brain are more likely to connect together. What is surely more interesting are the cases in which that rule is broken, i.e. where the greatest deviation from standard white-matter distance occurs?
    In these situations, connectivity might be informative, because there must be a reason behind connections that are not predictable from anatomical location.
  8. Do you really believe that the kinds of processes that generate human thought, understanding, belief, reasoning etc. can be even coarsely described in terms of the topology of the network? I'd say certainly not, at least with anything resembling the kind of network we're talking about today.
On the other hand, in favour of networks,
  1. MEG and connectivity might help understand short term plasticity.
  2. It is still possible that network statistics of a "node" might help us understand why lesions to some brain areas cause more symptoms than others. One might argue that a node with high centrality would, when lesioned, cause more deficits than a node with low centrality. But in drawing this conclusion, you make certain silent assumptions. In particular, you are suggesting that when a lesion fails to disconnect two regions, because there are other (indirect) routes between them, this redundancy of connectivity allows "information flow" "around" the lesion. First, it is not clear that connections in a network represent information flow, even if they were directional. Functionally derived networks, in particular, notoriously connect nodes which are simply driven by a common source. Second, it seems incoherent to suppose that nodes are performing computations, if you then speak of a damaged node being replaced by a chain of nodes that effectively "conduct around" it. Third, if we bite the bullet and suggest that some connections really do pass right through some nodes, e.g. if white matter and grey matter were not fully distinguished (as is likely to be the case in the striatum), then it is entirely unsurprising that a lesion would affect distant areas - this is just a glamorisation of the age-old phenomenon of diaschisis, and needs no network statistics to explain it.
  3. ?
In conclusion:  

Somewhere, the vague and heterogeneous metaphors of electrical conduction, internet servers, ethernet hubs, logic gates, connectionist synaptic weights and telephone switchboards have become muddled and confused, to the point where teasing out meaningful conclusions seems futile.

Tuesday 10 June 2014

Discarding function outputs

I wish I'd known MATLAB had... [~,Y]=F()

  • If you don't want the first return value from a function, but you want the second one, you can discard the first one with ~.
  • The main advantage is it doesn't clutter your workspace with dummy variables like temp, z, or whatever you use for unwanted variables.
  • Common examples might include:
    [~,E]=eig(X)      if you want the eigenvalues but not the eigenvectors
    [~,~,kcode] = KbCheck     in Psychtoolbox, to get just the key codes of currently depressed keys
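
    As one more minimal sketch (the variable names here are made up for illustration): MAX also returns the index of its maximum as a second output, so ~ lets you keep just the index.
       v = [3 1 4 1 5];
       [~, imax] = max(v)     % imax = 5; the maximum value itself is discarded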

Unpacking colons to indices


I wish I'd known MATLAB had... X(':')

  • The colon character can be used as an index, to mean a whole row/column/slice etc. But you can use the quoted character too!
  • Why would this ever be useful?  When using cell-expansion indexing.

Example:

Let us say that you have a line that extracts a single vector from a matrix, but sometimes you need the first row, and other times you need the first column.
i.e. sometimes you want Y=X(1,:), and other times you want Y=X(:,1).
  •   Can this be done without an IF statement?
  •   Yes:
         indices = {':',1};
         indices = fliplr(indices);
         Y=X(indices{:})
  • This works because indices{:} expands to a comma-separated list, in this case 1 and ':'.
  • Expansion of cell arrays to these lists can be used as input to functions, or indices of an array.
  • This is also how varargin{:} works.  
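
A related use, sketched below with made-up names (dim and k are hypothetical variables), is taking the k-th slice of an N-dimensional array along an arbitrary dimension, again with no IF statement:
       X   = rand(3,4,5);
       dim = 2;  k = 3;                              % which slice to take, and along which dimension
       indices      = repmat({':'}, 1, ndims(X));    % {':',':',':'}
       indices{dim} = k;                             % {':',3,':'}
       Y = X(indices{:});                            % same as X(:,3,:)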

Signed angles

I wish I'd known MATLAB had... ATAN2

  • ATAN2 is the arctangent, but takes Y and X separately. It's not quite the same as ATAN(Y/X).
  • The problem with normal ATAN is that, because you are calculating Y/X, the function can't distinguish between when both X and Y are positive, and when both X and Y are negative! So when an angle is returned, its range is only pi, not 2*pi. ATAN2 solves this problem.
  • ATAN2(Y,X) gives the angle in radians of the point (x,y) from the origin. The angle is measured from the horizontal line going rightwards, i.e. the positive x-axis. Positive angles go anticlockwise, and negative angles go clockwise from this line; pi and -pi both indicate points on the negative x-axis going leftwards, i.e. x is negative and y is 0.
  • ATAN2 is great because it is signed (+/-)! so you don't lose information.
  • Note that you can also do this easily with complex numbers:
   ATAN2(Y,X) is the same as   angle(X+1j*Y)
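
A quick sanity check of the quadrant behaviour (the two points used are just illustrative values):
       atan(  -1 / -1 )        % pi/4    - indistinguishable from the point (1,1)
       atan2( -1, -1 )         % -3*pi/4 - correctly places (-1,-1) in the third quadrant
       angle( -1 + 1j*(-1) )   % -3*pi/4 - the complex-number form agrees with ATAN2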

Thursday 5 June 2014

MATLAB Unwanted text output

How to find a line that is producing unwanted text output in the console?


  • If you have a recent version of Matlab (e.g. R2012 or later), load the .m file in the editor, and look in the right-hand margin.
  • There should be orange markers for parser warnings. 
  • If you hover over these markers, any statement that produces unspecified text output will show the message "Terminate statement with semicolon to suppress output", along with a "Fix" button!
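
If you prefer the command line, a sketch like the following should also work (myAnalysis.m is just a placeholder file name); checkcode runs the same Code Analyzer over a file and reports the offending line numbers:
       msgs = checkcode('myAnalysis.m');     % all Code Analyzer messages for the file
       for k = 1:numel(msgs)
           fprintf('line %d: %s\n', msgs(k).line, msgs(k).message);
       end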


Getting the sort ordering


I wish I'd known MATLAB had... [Y,I] = SORT(X)

  • Sort has two outputs! The second output is the indices of the elements, i.e. the re-ordering vector
  • Y is the same as X(I)
  • Very handy if you want to sort a matrix by one column:
        [~,I] = SORT(M(:,1));   % sort M by its first column
        Y = M(I,:);
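
    A tiny worked example of the relationship between the two outputs (the numbers are arbitrary):
       X = [3 1 2];
       [Y, I] = sort(X);     % Y = [1 2 3], I = [2 3 1]
       isequal(Y, X(I))      % true - the sorted values are just X indexed by I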

Pairwise differences

I wish I'd known MATLAB had... DIFF

  • DIFF calculates the sequential differences between numbers in an array. 
  • It can work along any dimension of the array: Y = DIFF(X,1,2) calculates differences along the second dimension, i.e. across the columns of each row. (Careful: the second argument is the order of the difference, so DIFF(X,2) actually takes the difference twice.)
  • For vectors, you can do a diff in different ways - e.g.
        Y = DIFF(X)   is similar to
        Y = [X nan]-[nan X] (except it has 2 extra columns, which are NaN)
  • Warning - DIFF(X) always has one fewer row (or column etc., depending on the dimension) than X! I often pad the diff with an extra row:
       Y = [ nan(1,size(X,2))
             DIFF(X)            ]
  • Also note that there is a function GRADIENT that does something similar. In particular, the gradient at a point is calculated using the DIFF of the points on either side of it. So,
        Y=GRADIENT(X)        if X is a row vector, is similar to
        Y=MEAN([ [nan, DIFF(X)] ; [DIFF(X), nan] ])
    except at the first and last points (thanks Sean Fallon for this info!).
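
    A quick numerical check of that relationship (X here is just an arbitrary example vector):
       X  = [1 4 9 16 25];
       d  = diff(X);                         % [3 5 7 9]   - one element shorter than X
       g  = gradient(X);                     % [3 4 6 8 9] - same length as X
       g2 = mean([ [nan d] ; [d nan] ]);     % [nan 4 6 8 nan]
       % g and g2 agree everywhere except at the first and last points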