Highlights

Each year, a committee of experts faces the difficult task of choosing, from among all ICREA publications, a few that stand out from the rest. It is quite a challenge: the debates sometimes get heated, and are always difficult, but a list of 24 publications eventually emerges. No prize is awarded, and the only additional recognition is the honour of being presented as a Highlight. Each publication has something special, be it a particularly elegant solution to an old problem, a spectacular impact in the media or simply the fascination of a revolutionary idea. Whatever the reason, they are the best of the best and, as such, we are pleased to share them here.

LIST OF SCIENTIFIC HIGHLIGHTS

  • Emerging concepts for the dynamical organization of resting-state activity in the brain (2011)

    Deco, Gustavo (UPF)

    A broad body of experimental work has demonstrated that apparently spontaneous brain activity is not random. At the level of large-scale neural systems, as measured with functional MRI (fMRI), this ongoing activity reflects the organization of a series of highly coherent functional networks. These so-called resting-state networks (RSNs) closely relate to the underlying anatomical connectivity but cannot be understood in those terms alone. Here we review three large-scale neural system models of primate neocortex that emphasize the key contributions of local dynamics, signal transmission delays and noise to the emerging RSNs. We propose that the formation and dissolution of resting-state patterns reflects the exploration of possible functional network configurations around a stable anatomical skeleton.
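
    As a purely illustrative sketch of the kind of model discussed above (not the authors' implementation), the toy simulation below couples a few noisy nodes through a fixed "anatomical" matrix with a common transmission delay and then computes the correlation structure of the resulting spontaneous activity. The coupling matrix and all parameter values are invented for illustration only.

```python
# Toy resting-state model: noisy rate nodes coupled through a random
# "anatomical" matrix with a common transmission delay (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 10, 1e-3, 5.0             # nodes, time step (s), duration (s)
tau_delay, sigma = 0.02, 0.1         # transmission delay (s), noise level

C = rng.uniform(0, 1, (N, N))        # stand-in "anatomical" coupling
np.fill_diagonal(C, 0.0)
C /= C.sum(axis=1, keepdims=True)    # normalise the input to each node

steps, lag = int(T / dt), int(tau_delay / dt)
x = np.zeros((steps, N))             # node activity over time

for t in range(1, steps):
    delayed = x[max(t - lag, 0)]                 # activity seen through the delay
    drive = -x[t - 1] + C @ np.tanh(delayed)     # local decay + delayed coupling
    x[t] = x[t - 1] + dt * drive + sigma * np.sqrt(dt) * rng.standard_normal(N)

fc = np.corrcoef(x[steps // 2:].T)   # "functional connectivity" of the toy network
print(np.round(fc, 2))
```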

  • Better Colleagues, Better Work? (2011)

    Eeckhout, Jan (UPF)

    More able and skilled workers are more productive: by any measurable standard, they produce more and they earn higher wages. But are the same workers also more productive if they are surrounded by other more able and skilled co-workers? Casual observation of how much effort and resources are spent on putting together the most appropriate teams of professionals seems to indicate that the exact composition of collaborators matters a great deal. The most reputable hospitals vie for the best doctors, the best law firms make the best lawyers partner and top universities attract the best researchers. Yet economists have found no evidence of such complementarity effects economy-wide. Using social security data, wages are decomposed into a component due to the individual worker and one due to the characteristics of the firm environment. It turns out there is no significant correlation between these two components, and hence there is no evidence of those complementarity effects.

    Jan Eeckhout (UPF) and Philipp Kircher (LSE) show that the lack of evidence is due to a flaw in the method. The main insight is that while working in a better firm does make all workers more productive, it does not necessarily increase wages. In fact, wages could be lower when getting a job in a better firm even if that worker is producing more output. This is due to mismatch: the better firm would ideally like to fill that position with a more skilled worker, so to be willing to offer the job to a worker of lower ability, it will offer a lower wage. It is precisely this wage decrease that leads to the failure to find evidence of the complementarities. They propose a method that takes this into account and permits correct inference from the data. Correctly measuring how strong those complementarities are is important to evaluate the losses from badly allocated teams of collaborators. And it is a key piece of information for policies that aim to achieve the optimal allocation of resources, for example unemployment insurance that provides incentives for the unemployed to look hard for the right job.
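
    For readers unfamiliar with this decomposition, the sketch below runs it on synthetic data: log wages are regressed on worker and firm dummies, and the correlation between the two estimated components is computed. It is a minimal illustration of the conventional approach whose interpretation the authors question, not their proposed method, and every number in it is made up.

```python
# Two-way fixed-effects wage decomposition on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_workers, n_firms, n_obs = 300, 15, 5          # workers, firms, spells per worker

worker_ability = rng.normal(0, 1, n_workers)    # true person component
firm_quality = rng.normal(0, 1, n_firms)        # true firm component

rows = []
for i in range(n_workers):
    for _ in range(n_obs):
        j = rng.integers(n_firms)               # job mobility across firms
        wage = worker_ability[i] + firm_quality[j] + rng.normal(0, 0.5)
        rows.append((i, j, wage))
worker_id, firm_id, log_wage = map(np.array, zip(*rows))

# Design matrix: worker dummies plus firm dummies (one firm dropped as reference).
X = np.zeros((len(log_wage), n_workers + n_firms - 1))
X[np.arange(len(log_wage)), worker_id] = 1.0
mask = firm_id > 0
X[np.where(mask)[0], n_workers + firm_id[mask] - 1] = 1.0

coef, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
worker_fe = coef[:n_workers]
firm_fe = np.concatenate([[0.0], coef[n_workers:]])

# Correlation between estimated person and firm components across observations.
print(np.corrcoef(worker_fe[worker_id], firm_fe[firm_id])[0, 1])
```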

  • The art of magnetic writing (2011)

    Gambardella, Pietro (ICN2)

    Computer files that allow us to watch videos, store pictures, and edit all kinds of media formats are nothing else but streams of "0" and "1" digital data, that is, bits and bytes. Modern computing technology is based on our ability to write, store, and retrieve digital information as efficiently as possible. In a computer hard disk, this is achieved in practice by writing information on a thin magnetic layer, where magnetic domains pointing "up" represent a "1" and magnetic domains pointing down represent a "0".

    The size of these magnetic domains has now reached a few tens of nanometers, allowing us to store a Terabyte of data in the space of just a few square centimeters. Miniaturization, however, has created numerous problems that physicists and engineers worldwide struggle to solve at the pace demanded by an ever-growing information technology industry. The process of writing information on tiny magnetic bits one by one, as fast as possible, and with little energy consumption, represents one of the biggest hurdles in this field.
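
    As a rough, back-of-the-envelope check of the density quoted above (using an assumed ~10 nm bit pitch, purely for illustration):

```python
# Order-of-magnitude check only: area taken by one terabyte of data if each
# bit occupies a square magnetic domain roughly 10 nm on a side (assumed value).
bit_pitch_m = 10e-9                  # assumed domain size
bits_per_tb = 8e12                   # 1 terabyte = 8e12 bits
area_m2 = bits_per_tb * bit_pitch_m ** 2
print(f"{area_m2 * 1e4:.0f} cm^2")   # ~8 cm^2, i.e. a few square centimetres
```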

    A team of scientists from the Catalan Institute of Nanotechnology, in collaboration with SPINTEC researchers in France, has discovered a new method of writing magnetic data that meets all of these requirements. Magnetic writing is currently performed using magnetic fields produced by wires and coils, a methodology that suffers severe limitations in scalability and energy efficiency. The new technique eliminates these limitations and provides extremely simple and reversible writing of memory elements by injecting an electric current parallel to the magnetic layer. The key to this effect lies in engineering a magnetic cobalt film less than one nanometer thick sandwiched between platinum and aluminum oxide. The research team showed that current pulses lasting less than 10 ns reliably switch the magnetization at room temperature, while further miniaturization and faster switching appear easily within reach.

    This work has interesting applications for the magnetic recording industry, in particular the development of magnetic random access memories (MRAMs). By replacing standard RAMs, which need to be refreshed every few milliseconds, non-volatile MRAMs would allow instant power up of a computer and also save a substantial amount of energy. An additional advantage of the technique is that it is more efficient in "hard" magnetic layers that can be more easily miniaturized to nanometer dimensions, resulting in significantly increased information storage density. Three patent applications de

  • Higgs Hunting at the LHC (2011)

    Juste, Aurelio (IFAE)
    Martínez Pérez, Mario (IFAE)

    Since 2009, the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland) has been colliding protons at a center-of-mass energy of 7 TeV, the highest energy ever reached by a particle accelerator. One of the main goals of the LHC is the search for the Higgs boson, the last piece of the Standard Model that remains undiscovered. The so-called Higgs mechanism was introduced in 1964 to explain the breaking of the electroweak symmetry, leading to a massless photon, mediator of the electromagnetic force, and very heavy W and Z bosons, mediators of the weak interaction. The Higgs mechanism would also explain the mass of fermions, as the Higgs field permeates the Universe and interacts with all particles, endowing them with their mass. Finally, the existence of a Higgs boson associated with the Higgs field is postulated, although its mass is not predicted and must be determined experimentally.

    During the last four decades, particle physicists have searched for the Higgs boson. In the 1990s, the LEP electron-positron collider at CERN concluded that the Higgs boson, if it exists, should have a mass larger than 114.4 GeV at 95% confidence level (C.L.). Since 2002, the quest for the Higgs boson was mainly undertaken by the CDF and D0 experiments at the 1.96 TeV proton-antiproton Tevatron collider at Fermilab (near Chicago, USA). These experiments were able to extend the excluded mass range at 95% C.L. to 156-177 GeV, still leaving much room for the Higgs boson to hide.

    The analysis of the data delivered in 2011 by the LHC to the ATLAS and CMS experiments has translated into a huge step forward in the quest for the Higgs boson. In particular, the combination of searches at the ATLAS experiment has excluded at 95% C.L. the presence of a Higgs boson with mass in the ranges 112.7-115.5 GeV, 131-237 GeV and 251-453 GeV [1-4]. Most importantly, the experiment observes a suggestive excess of events around 126 GeV (see Fig. 1) which would be consistent with the potential signal of a Higgs boson, although the observation is not yet statistically significant. It corresponds to a 3.6 standard deviation departure from the background-only hypothesis (see Fig. 2) but, after including look-elsewhere effects,..
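
    To make the statistical vocabulary above concrete, the sketch below converts a 3.6 standard-deviation local excess into a p-value and applies a naive trials-factor correction as a stand-in for the look-elsewhere effect. The trials factor used is an arbitrary placeholder, and this is not the ATLAS statistical procedure.

```python
# Convert a local significance to a p-value and apply a crude trials-factor
# ("look-elsewhere") correction. Numbers are illustrative only.
from scipy.stats import norm

local_sigma = 3.6
p_local = norm.sf(local_sigma)                 # one-sided p-value for 3.6 sigma
print(f"local p-value ~ {p_local:.1e}")        # ~1.6e-4

trials = 30                                    # assumed number of independent mass windows
p_global = 1 - (1 - p_local) ** trials         # naive look-elsewhere correction
print(f"global significance ~ {norm.isf(p_global):.1f} sigma")
```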

  • Why might a mutation kill one individual, but not another? (2011)

    Lehner, Ben (CRG)

    Many people have 'disease-causing' mutations in their genomes but never actually develop a disease. Researchers from the Centre for Genomic Regulation (CRG) have studied how non-genetic variation in gene expression may help to predict whether an individual develops a disease or not.

    The effects of a particular disease-causing mutation can be very severe in some individuals, only mild in others, and of no consequence in a lucky few. These differences in the outcome of mutations are usually ascribed to interactions with additional genetic variation (i.e. to genetics) or to interactions with lifestyle, diet, etc. (i.e. to the environment). However, it has long been realised by researchers who work with laboratory animals that even in the absence of genetic variation - and even in a highly controlled laboratory environment - the same mutation can kill one individual and yet have no apparent effect in another. What are the causes of this variation in the effects of inherited mutations?

    Researchers at the CRG have studied this question using the microscopic worm Caenorhabditis elegans as a model. Due to its simplicity, C. elegans is one of the most widely studied organisms in biology, and was the first animal to have its genome sequenced. Recently, three different Nobel Prizes have been awarded for research using C. elegans. The researchers tested the idea that it is 'random' variation in gene expression (the extent to which a particular gene is turned on or off) during the development of the animal that influences the outcome of each mutation. They developed a method to quantify the activity of genes in developing embryos, allowing them to test the consequences of variation in the expression of particular genes. Using this approach they found that 'random' differences in the expression of a gene can sometimes have quite a big effect. Indeed, by quantifying variation in the expression of both a specific and a more general cellular component, it was possible to predict much more accurately what the consequences of a particular inherited mutation would be for a particular individual.
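
    The sketch below is a purely synthetic caricature of this idea (not the CRG data or analysis): "survival" of mutant individuals is simulated as depending on the random expression of a mutation-specific gene and a general cellular component, and a simple classifier shows that measuring both reporters predicts the outcome better than measuring only one.

```python
# Synthetic illustration: predicting mutant outcome from expression of two
# reporters. All numbers are invented to mimic the qualitative idea only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
specific = rng.normal(0, 1, n)          # expression of the mutation-specific gene
general = rng.normal(0, 1, n)           # expression of a general cellular component

# Toy rule: higher expression of either buffer makes survival more likely.
p_survive = 1 / (1 + np.exp(-(0.8 * specific + 1.2 * general)))
survived = rng.random(n) < p_survive

for name, X in [("specific only", specific[:, None]),
                ("specific + general", np.column_stack([specific, general]))]:
    acc = LogisticRegression().fit(X, survived).score(X, survived)
    print(f"{name}: accuracy {acc:.2f}")
```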

    The work illustrates how, even if we completely understand all of the genes important for a particular human disease, we may never be able to accurately predict what will happen to each person from their genome sequence alone. Rather, to develop personalised and predictive medicine it will also be necessary to consider the varying extent to which genes are turned on or off in each person.

  • Quantum Simulator of Frustrated Antiferromagnetism (2011)

    Lewenstein, Maciej Andrzej (ICFO)

    Together with the experimental group of Klaus Sengstock in Hamburg, we have proposed and realized the first quantum simulator (i.e. a special-purpose quantum computer) of frustrated antiferromagnetism. A Bose-Einstein condensate loaded into an anisotropic triangular lattice undergoes various orderings with respect to the phase of the condensate wave function. In the strongly correlated regime the system should exhibit various exotic quantum states, such as a spin-liquid state with non-trivial topological order.
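
    A minimal way to see what "frustration" means here (a toy classical picture, not the experimental system): on a single antiferromagnetic triangle, Ising-like spins can never satisfy all three bonds at once, whereas continuous phases, like those of the condensate wave function, settle into a 120-degree compromise.

```python
# Geometric frustration on a single antiferromagnetic triangle (toy model).
import itertools
import numpy as np

J = 1.0  # antiferromagnetic coupling strength (arbitrary units)

# Ising spins: enumerate all configurations of the triangle.
energy = lambda s: J * (s[0]*s[1] + s[1]*s[2] + s[2]*s[0])
best = min(itertools.product([-1, 1], repeat=3), key=energy)
print("best Ising configuration:", best, "energy:", energy(best))
# -> energy -1, not -3: at least one bond is always "frustrated".

# XY spins (condensate phases): energy of the 120-degree compromise state.
phases = np.array([0, 2*np.pi/3, 4*np.pi/3])
e_xy = J * sum(np.cos(phases[i] - phases[j]) for i, j in [(0, 1), (1, 2), (2, 0)])
print("XY energy of 120-degree state:", round(e_xy, 2))   # -1.5
```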
