I had a talk with people from Globus. It now uses a subscription model to sustain the software, although it remains part of the University of Chicago.

Its focus is no longer on access to compute resources, as it once was, but on facilitating data transfers: handling credentials at each end, retrying failed transfers, and so on. The Tier 2 has had concerns about how data transfers will work across an increasingly complex data landscape – for example, a researcher might be using cloud resources, local data resources, university HPC, their ‘home’ Tier 2, another Tier 2 via an EPSRC RAP call, and a national facility. Does Globus offer a way forward on this?

Of course the old questions still arise: long-term curation and ownership of data; data transfer and the impact of networking; whether to move the data to a new location or compute where it already is; ensuring that computation only starts once the data is present; and ensuring the data is moved in a timely manner.
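Two of these concerns – retrying transfers and only starting computation once the data has arrived – can be sketched in plain Python. This is a minimal illustration of the pattern, not Globus code: `copy_fn` is a hypothetical stand-in for whatever actually moves the data (a Globus transfer submission, an rsync call, etc.).

```python
import time
from pathlib import Path

def transfer_with_retry(copy_fn, attempts=3, backoff_s=1.0):
    """Call copy_fn until it succeeds, retrying with exponential backoff.

    copy_fn is a hypothetical stand-in for the real transfer mechanism;
    it should raise OSError on a transient failure.
    """
    for attempt in range(attempts):
        try:
            return copy_fn()
        except OSError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            time.sleep(backoff_s * 2 ** attempt)

def wait_for_data(path, timeout_s=60.0, poll_s=0.1):
    """Block until `path` exists, so downstream computation only starts
    once the data is actually present; raise on timeout."""
    deadline = time.monotonic() + timeout_s
    p = Path(path)
    while not p.exists():
        if time.monotonic() > deadline:
            raise TimeoutError(f"data never arrived at {path}")
        time.sleep(poll_s)
    return p
```

A real system (Globus included) would do this server-side with task status polling rather than filesystem checks, but the gating logic is the same: submit, retry on transient failure, and block the compute step until arrival is confirmed.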