Vijay Pande's Interview, Transcription

From FaHWiki

Transcription of NoahJ's Interview With Vijay Pande

On January 14, 2005 Noah Johnson (NoahJ) interviewed Dr. Vijay Pande by telephone. The interview, based on a list of questions submitted to Dr. Pande earlier, ranged from general Folding topics to discussion of specific MacOS X questions. Below is a direct transcript of the interview (still a work in progress). Audio files of the interview in sizes from 1 MB to 12.7 MB, all MP3 files, are available by linking through the original interview announcement.

A different version of the transcript, edited for clarity and slightly condensed, can be found here.


This transcription is still a work-in-progress

Legend:

  NJ = NoahJ       
  VP = Vijay Pande      
  [something  (?)] = We're not sure if "something" is the correct word used in the interview.
  [ ... ??? ... ] = We cannot hear / understand what was said.

NJ

Let's go ahead and get started on this interview.

[I'm trying (?)] to keep it light so that we can make it enjoyable for new folders, but I wanted to make sure we get some good information in here too, so that people would be interested in reading it if they have been folding for a little while.

If there are questions that you can't answer for whatever reasons, just let me know and we'll just mark them off.

And we'll just, I'll just start at the top of the list that I sent you and we'll just go down from there.

So the first question was: what, in your words, is the main goal of the whole Folding@home project, and what milestones have you achieved toward that goal so far, just in basic terms?


VP

Our primary goal is to understand the misfolding-related diseases like Alzheimer's, Parkinson's or Huntington's disease, and using computation to do this is actually a very novel approach.

I think for decades one has dreamed about the idea of using computers to understand diseases and biology and to design drugs, and actually for decades also people have sort of come up with methods which they hoped would be successful, but really, it hasn't quite panned out as well as people would have liked. I think it's only recently now that the algorithms are getting good enough and the computational power is big enough that we can actually do this. It might seem kinda counter-intuitive, but when it comes to designing drugs and looking at how these systems work and things like that, even though the systems are small in terms of their absolute scale, they are surprisingly complicated.

And so, even though people use computers to design other things that are complicated, like, you know, computer chips or bridges or things like that, these molecular systems are yet thousands to millions of times more complex. And that's where really, to do something truly accurate and really useful, one needs to develop new methods, and then, even with these new methods, to have computational power that is thousands to millions of times greater than most researchers typically have.


NJ

So basically there hasn't been as much ground broken as you would hope but things are getting better with the new cor... like the new Amber core and stuff like that...


VP

No, actually I was talking about, sort of, the history pre-Folding@Home, laying the foundations. Where Folding@Home has come in is to really come up with new algorithms and also to come up with, sort of, computational power; you know, actually we have more computer power than you can get if you use all [... ??? ...] computer centers combined, and that has allowed us to do things that really other researchers couldn't do, and so we hit a couple of early milestones that have already worked out quite well.

Our ability to predict the binding of small molecules or drugs to proteins has now gotten to a point where it's sufficiently accurate that it really could be useful for designing drugs and useful to the pharmaceutical industry. The paper on that is about to be submitted for publication, and that's something we're really excited about.

I think something [... ??? ...] is that just even simulating the folding of a protein, straight from just, sort of, the most basic information (we already know the protein sequence), really hadn't been done before, and that was one of our early milestones. Many of our papers are on developing methods to do that and how our predictions compared with experiments.

And actually there are several papers; there's this paper in Nature that I think especially stands out.

So I think that, really, the history that I was describing was, sort of, up to the point where Folding@Home came onto the scene, and since then, I think we've been able to... one way to think about it is that, if you just do it by Moore's law, we have maybe about a thousand tim..., a factor of a thousand times more power than most of our competitors in doing calculations, more raw computer power.

And if you ask how many years of Moore's law would you have to wait to get that, the answer is about fifteen years.

And so in a sense we can kind of do research that other groups could really only do fifteen years in the future.
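
To make the arithmetic behind that estimate concrete, here is a minimal Python sketch. The 1,000x figure is the one quoted above; the 18-month doubling period is a common rule-of-thumb assumption, not a number from the interview:

  import math

  # Rule-of-thumb Moore's law: computing power doubles roughly every 18 months.
  # (The doubling period is an assumption; the 1000x advantage is from the interview.)
  doubling_period_years = 1.5
  advantage = 1000

  doublings_needed = math.log2(advantage)                   # about 10 doublings
  years_to_wait = doublings_needed * doubling_period_years

  print(f"{doublings_needed:.1f} doublings -> about {years_to_wait:.0f} years of Moore's law")
  # -> roughly 15 years, matching the estimate above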


NJ

So you're kinda leapfrogging the old techniques with this new one......


VP

Yeah, and so it lets us, sort of, do research that is, you know, several generations in the future. Not fifteen thousand years into the future; you know, we can't do everything that we would like, but it's sufficient that we're able to do things that I think a lot of people would have thought would be impossible, and that's the part that gets me excited: when we can, sort of, do things in a way that really couldn't be done any other way.


NJ

And this has been a really great project for that type of thing, from what I've seen.


VP

I hope so, and I think that's certainly our goal: if things could be done in a simpler way, then we shouldn't be doing distributed computing to do it. Also, some distributed computing projects I think are more interested in just proving the basics of how distributed computing works rather than really caring about the application that's running.


NJ

So [now (?)] I'm going to our next question, which is what makes this project different from other distributed computing projects, like Distributed Folding for example.

Why should people choose Folding@Home rather than maybe one of those?


VP

Well, there's really lots of other interesting distributed computing projects out there, and obviously I'm biased in my view of them, like any proud father or something like that, but with those caveats of the, sort of, obvious nature of my bias in this, I think one thing [... ??? ...] well, there's a couple of different ways.

One is that we are doing things that are directly related to diseases; we have workunits that are [atom-direct (?)] Alzheimer's workunits and Huntington's disease workunits and so on. Not that... you know, a lot of projects are saying that they would eventually maybe be related to something, that they were thinking about diseases they would be interested in [... ??? ...].

We have things that are much more direct, and we have experimental collaborations that are looking very promising in that area, and hopefully our paper will also go out in about a week with some new results on Alzheimer's.

So that's one area I think is linked to diseases much more directly, and we're actively really interested in [these diseases (?)]; that's sort of a major thrust within our group.

I guess a step in terms of a [ audio issues ] in terms of a track record, it's interesting to... you know, unfortunately, nobody ever keeps track of what people have claimed to do in distributed computing or whether they've reached those claims...


NJ

huhuum


VP

And I don't think that's a role for me to do, but I think it's interesting to look back at the things that we've done and the publications that have come out of it. I think there's maybe 15 or so papers that have come directly from Folding@Home... maybe now it's closer to 20. And I think that is unmatched by... you know, by a factor of 10 [almost / or most (?)], or something like that. It's something where I think we have been extremely productive, and productivity is not just in terms of claims that we have results, but actual results that are peer-reviewed and really tested and validated by the scientific community.

I think that those two areas, [served our direct connection disease (?)] and sort of the track record that we've had, are the [parts (?)] that I'm most proud of.


NJ

So basically the fact that you actually get results is another good reason why people should join in


VP

At least for me personally, that's something that I do really care a lot about and it's not so much about the distributed computing per se as much as trying to do something which really couldn't be done in any other way.


NJ

Ok, so then, moving on, the next question being that Team MacOS X, which I am a representative of [here (?)], is primarily a team that runs Macintosh computers, and there are a few questions that other team members asked me to ask you about, so we'll go ahead and get started on those...

The main issue on their minds of course is the Tinker work units and you know I'm sure you get very frustrated with some of the stuff that comes down from those ...

I guess the first tough question I can ask is: how important are the Tinker work units to the overall project? For example: I'm a new folder, I get a Tinker work unit on my machine, it's gonna take maybe 4 to 10 days or more to complete. Why should I keep [adding (?)] instead of just bagging the whole thing or throwing that work unit away and hoping for a much faster Gromacs work unit?


VP

Yeah, there are certain projects that we've been doing that are Tinker projects that have been running for over a calendar year, maybe two calendar years, and they're long-term, extremely important projects, something that couldn't be calculated in just a few weeks even with Folding@Home... and it's the type of thing where it's really quite a grand project, otherwise it wouldn't take so long to do...

I could describe the details but they're somewhat technical, but basically... some of the hardest things that we've ever tried are still going on with that.

And we're hoping actually within maybe about three months or so to be wrapping up the first set of those results for these long-time Tinker runs and sending those out for publication, and I'm actually very excited about it. I think it's a calculation that, again, would really push the boundaries of what one could think is possible to do.


NJ

huhuum


VP

I think many of our new calculations have been Gromacs, especially the ones that ran for just... projects that have just run for a few months, and... [... ??? ...] the calculations [done with (?)] Gromacs are different than those that are done with Tinker... there are certain things that can't be done with Gromacs and can only be done with Tinker...

Amber, sort of, sits somewhere in between, and there are calculations that are running Tinker that could also be running Amber, and we're thinking about possibilities of switching some of those things over to Amber... In terms of scientific speed, the Amber stuff is gonna be faster scientifically than Tinker, and so in the areas where they overlap we're gonna be running more Amber things than Tinker, but the nature of the scientific calculation that's done in Tinker is actually still very important and was part of the motivating feature for getting Amber online.


NJ

Ok.

And is there a timeframe as far as getting Amber over to the Macintosh? I understand it's still not exactly ready to go for the Powermac yet.


VP

Yeah.

I think we want to avoid a scenario similar to the Tinker scenario, in that Amber doesn't have SSE-specific code [ ... ??? ... ] Altivec [ ... ??? ... ] in the case of [ ... ??? ... ] doesn't have Altivec or SSE.

And uuuh... so in many ways we're gonna have the exa... and it's also Fortran code... so we're gonna have the exact same issues with Amber optimization as with Tinker optimization.

And so for that reason, we, you know, we haven't [ ... ??? ... ] a big push to port it until we can figure out what would be a reasonable compiler to use there, and the issues of optimizing Tinker are identical to optimizing Amber.


NJ

Now in my original questions that came up, the next question was that Tinker has been notoriously slow for... compared to Gromacs workunits, especially on PowerPC hardware, and it's said that Fortran, being the compiler used, is responsible for much of that slowness as far as not being as optimized for Macintosh.


VP

Yeah.


NJ

Now, of course we're working on that.

One of our team members is working very diligently on that, so I guess, besides community members working on it, are there any internal people working on optimizing that, or is that just something where we're gonna wait and see what happens and then go from there?


VP

Yeah.

Every once in a while we have various approaches that we're trying.

It's something where you... it's not a small tweak, and I think it's, I believe it's Tom that's working on the optimization on the...


NJ

That's correct.


VP

... yeah, I think Tom... I've been trying to follow his thread there as well, and it seems like by working through different compiler issues he's been able to speed it up a little bit, but it's still not gonna bring it in line with what we can do with the Intel Fortran compiler on the x86 hardware.

Is that a fair summary of... at least, the current status?


NJ

Yeah, that seems to be right... I mean, we're seeing... I think the latest one he saw was like a 24% increase, which is, it's reasonable, it's good compared to, you know, what it is, but yeah, I understand...


VP

One problem we had is that with some of the early compilers... I believe there is a problem with the Absoft compiler where, for certain versions of OS X, there is a bug in the "square root" or "one over square root" function, and so we could have released a version that was faster but it would only run on, like, OS 10.3 or something like that...


NJ

Ok ...


VP

... and part of the problem (and we'll have to see how Tom's stuff fits in this) is that we have to have code that can run on a variety of Macs... requiring someone to have the latest OS X version would probably cut some people out.


NJ

Exactly and that's perfectly understandable.

Ok... In terms of the overall project, have Tinker workunits actually been set up to limit their being assigned to Macintosh machines, or is it just basically out there?


VP

We're trying to do everything we can to keep Tinker away from the Macs, because they do so much better; the Macs do so much better with Gromacs than with Tinker.

The couple of things we've done is that basically the Macs are only set to run Gromacs servers, and also right now the default workunits, which you get when there's a misma...

[ audio issues ]


NJ

[ ... ??? ... ] have people on our team that obviously fold on PC as well as Macs


VP

Yeah [ ... ??? ... ] and that was a bit of a surprise. I see even just a lot of discussion about different ways to set up the PCs and things like that... I thought that was very unusual and, actually, I thought that was kinda neat actually...


NJ

Yeah we, we, what we tried to do, well, on our team is we pride ourselves on being folding-centric rather than "a specific computer type"-centric.


VP

Hu huum


NJ

And so we're trying to make folding the forefront and we just like to see, we just like OS X better than Windows for various reasons


VP

What I also liked about that is that actually I think there are a lot of people like me where at work we use Windows for [a variety of logistical things (?)] and things that have been imposed upon us.


NJ

Hu huum


VP

But at home, you know the only machine I have at home is actually an iMac so you know, so , I mean so, it's something where my wife would not use a Windows PC and since at home we get to choose what we use, you know, we do that, so I can imagine [there are (?)] people that are forced to live in the Windows camp but also [are interested in , but, sorry, (?)] would live in both camps.


NJ

Yeah, it's a very common story I hear quite a bit.

So, on that issue, one of the new technologies that has come out on the PC is, for example, HyperThreading.

There's been a lot of flap about HyperThreading, and P4s and Xeons have become somewhat of a controversial issue, at least among some in the user ranks.

There's the thing that has come down from you saying that you'd prefer if people wouldn't use HyperThreading.

And one of the questions people have is: how much of a liability is it to the overall project, and why should users who can run HyperThreading and run more than one workunit at a time choose not to?


VP

Yeah, a lot of this is kind of about the calculations we're doing: it's hard to give a good analogy...

You know, it's [best done (?)] by giving the full technical details, and those are in our papers. But to try to, sort of, make it a little more intuitive: a lot of what we're doing is almost like a race. And you could imagine having a race car with a really powerful engine or something like that. And you could choose to run that one race car that can go really fast, or you could have two cars that will each run maybe slightly faster than half the original race car... instead of 300 mph, we're talking about like 180 mph or something like that.

And I think that's actually not a bad analogy to what the HyperThreading is.

You're not getting a lot out of HyperThreading; you're not getting twice the CPU power, you're getting a small incremental increase, maybe 10%, maybe 20%. And the problem is that in this race, if you have 2 slow cars instead of one fast car, you're not going to win the race by having 2 slow cars. You're going to cover ground faster, sort of, on average, but in the end, especially since on some of these calculations we have to wait for everybody to finish, having more people, sort of, running the slower cars means that the whole process takes a lot longer. And that's, sort of, what slows things down.

I think what I try... I think the simplest way that I describe this is that I think right now, points-wise, it is true that HyperThreading is gonna get people more points, and we're not at the point where, you know, we wanna... You know, we want to encourage people to fold, and if people are more comfortable folding that way, you know, we really do welcome everything that we get, which, you know, has been wonderful so far. Hummm, if people care about what would be the best thing: right now HyperThreading is basically taking a really fast CPU and turning it into 2 slower CPUs that are maybe slightly faster than half the speed, but not dramatically, so... and that actually, science-wise, is not ideal.

So that's, sort of, the scenario but either way, you know, we're happy with the wonderful contributions that we are getting.
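
A rough way to see the race-car point in numbers: the 10-20% gain is the range quoted above, but the work-unit cost and CPU speeds below are purely illustrative, not Folding@home measurements.

  # Illustrative only: the ~20% HyperThreading gain matches the range quoted above;
  # the work-unit cost and CPU speeds are made-up numbers for the sketch.
  wu_cost = 100.0               # arbitrary units of work per work unit
  single_cpu_speed = 1.0        # one client on the full CPU
  ht_virtual_speed = 0.6        # each of two hyperthreaded "CPUs" (~1.2x total)

  time_single = wu_cost / single_cpu_speed     # 100 time units per work unit
  time_ht_each = wu_cost / ht_virtual_speed    # ~167 time units per work unit

  # Throughput slightly favors HyperThreading...
  print("WUs per 1000 time units, single client:", 1000 / time_single)         # 10.0
  print("WUs per 1000 time units, two HT clients:", 2 * 1000 / time_ht_each)   # ~12.0

  # ...but when a generation of the calculation has to wait for every
  # trajectory to come back, the slowest return time is what matters.
  print("Time to finish one WU, single client:", time_single)    # 100
  print("Time to finish one WU, HT client:    ", time_ht_each)   # ~167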


NJ

And one of the people asked me to ask: are there any plans to limit or eliminate the ability to use HyperThreading, or even more generally to run more than one core per physical processor, in the client or core?


VP

Yeah, no... actually our plans are just the opposite, which is to have the core utilize the processors themselves [uuh use like (?)] multiple processors themselves, so instead of needing to run separate clients, the core would run 2 processes or 2 threads and use both threads to do the calculation, and that's the best solution to this.

Because this way, if you have a hyperthreaded CPU or SMP CPU, which is so common on Macs, then [stead of (?)] having to run 2 clients, [it's just (?)] taken care of for you; the core will see multiple processors even if they're hyperthreaded and will just take advantage of it, and there won't be any issues. And in that case, when you run these two... let's see it as an SMP machine... if you could run a core that uses both processors, it would be like having your race car that is now, instead of half the s... instead of one race car, uuh, two race cars that are 100%; then you would have one that is twice as fast, and that's actually even better.

So basically our solution there lies in trying to really have the core take care of this... and that's something that's in progress, it's not something that is really trivial to do.

The scientific calculations are pretty tightly coupled, and breaking it up, using 2 processors to do it, is still, at some point, somewhat tricky to do well, and right now the only software that handles it well is Gromacs in a threaded way, and, so, I don't know, we'll have to figure out what's the best way to implement it, but that's something that's ongoing right now, and we have, in the labs, some tests that are looking pretty promising...

So that's the future of this; especially on the Intel side, uuh, sorry, x86 side, there's gonna be more, more need for this once the multicore CPUs come out.


NJ

Yes, that's definitely something you, I'm sure, you're eyeing with great anticipation.


VP

Yeah, and so we're trying to keep on track [such as, (?)] before that comes out that would be all ready, [such as, (?)] instead of denying people, prohibiting the use of multiple cores or multiple CPUs, we want to try and make it easier to do that, and, but yet at the same time, making it help the science even more too.


NJ

One other thing that you see, as far as when it comes to performance and trying to get a C..., well, a workunit in on time, one of the things you guys have implemented is a performance fraction, and if you look through your log you can see this on, on, whenever you return a CPU or whenever you return a workunit and get your new one, it gets your performance fraction.

What significance does this actually have in workunit determination?

Is there any future use of this that we have not seen implemented yet?


VP

Yes, so, actually, if you go to the server stats page, the webpage [... ??? ...] has all the server information, you'll see that certain servers have minimum performance fractions. And so, the performance fractions are used to send the fast CPUs to certain servers and then the slower CPUs to other servers.

Huuum, there's a lot of benefit in doing this. One is that you want to make sure that these clients are given work units that can be done in a timely fashion, and to try to keep everything [... ??? ... ]. And the other thing that's actually useful is that, again, it's most useful for us to have sort of a homogeneous set of computers, and grouping them by performance fraction really helps do that. So that's how it's used right now.

We actually, in our last meeting, group meeting on Folding@Home, sort of had a discussion about whether that's working out as well as we would like. We're thinking about various things to do about it, although it's much better than most benchmarks someone could have, like just a regular processor benchmark, because a processor benchmark, you know, it, uuh, the performance doesn't tell you about how, how, what fraction of the day that person is using the computer and so on.

The performance fraction is probably the most reliable thing we can have to say what's the probability of this person getting it back at what fraction of the deadline time. So I think right now it's used very heavily, and I think we're planning on trying to be even a little more creative with how we use it, and to try to keep, uuuuh, you know, every three months we have internal audits for how efficient Folding@Home is and try to brainstorm ways to, sort of, tweak the efficiency without ruining the stability or making it harder for donators or whatever.

But, that's one thing that's come up as rising in our priority list [for... (?)] to address.
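
The interview doesn't spell out the assignment logic in detail, but the idea described above, that certain servers require a minimum performance fraction and faster machines get steered to the more demanding servers, can be sketched roughly like this (the server names, thresholds, and client values are all hypothetical):

  # Hypothetical sketch of assignment by performance fraction, based only on the
  # description above: servers set a minimum performance fraction, so fast and
  # slow machines end up grouped on different servers. All values are invented.
  servers = [
      ("fast-wu-server", 0.90),    # demands the quickest turnaround
      ("regular-server", 0.70),
      ("fallback-server", 0.0),    # accepts anyone
  ]

  def pick_server(client_perf_fraction):
      """Return the most demanding server this client still qualifies for."""
      for name, minimum in servers:            # ordered most to least demanding
          if client_perf_fraction >= minimum:
              return name

  print(pick_server(0.97))   # fast-wu-server
  print(pick_server(0.75))   # regular-server
  print(pick_server(0.40))   # fallback-server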


NJ

That's kinda what I had wondered about, about the DGros and the Amber core, as far as them becoming a larger part of the project. You answered that in the affirmative, they will be. At least that's what I got out of it.


VP

Yeah, they prob... We didn't talk about the DGros so much... but... the DGros are, in particular, for the drug-binding calculations, and I think they are about to roll out another big batch of those, so I think they're gonna come back a little bit too.


NJ

Ok... Now, let's see... In the past, there have been people who have cheated to get additional points and, in their cheating, have actually done things that could cause bad publicity for the project.


VP

Huu huu


NJ

Are there any proactive steps [that've been/that are being (?)] taken by your team to protect the good name of this project ?


VP

Yes, absolutely, and I think the next version, version 6, will include some pretty sweeping changes to try to handle some of the problems that have occurred.

I think so far everything is under control, but I think it's the type of thing where we have to be proactive about it.


NJ

huuhuum


VP

And so we'll see if those changes stand version 6 [... ??? ... ] release timeline, but those are, there are a couple of things that are gonna be there.


NJ

Moving to some [... ??? ... ] more general-interest stuff... I don't know if you have these figures in front of you or if you can just, you know, round them off the top of your head, but... what kind of growth has this project seen over, say, just the last year?


VP

Yeah... actually, I don't think I have that figure in front of me. I think on the website there is a graph; the [... ??? ... ] folding at home dot stanford dot edu slash [stats (?)] dot html is a graph of things, and I think [... ??? ... ], and I think over the last year, I think, is when we've gone from averaging around, like, 120,000 to now averaging around 170,000. And so, that's still actually a pretty big jump, you know, 50K, you know, that's like, you know, 40% or something like that.


NJ

huhuum


VP

hummm ... I think there'll be even bigger increases in this coming year but uuh we'll see then ... it depends on what gets rolled out and when.


NJ

One of the other questions [shows (?)] how many work units [complete (?)] on average per day... but you, you [kinda (?)] answered that with how many per hour, and you could kinda average that out over...


VP

I'd say you can get a sense of that also from the serverstat page.

The serverstat page is kinda cryptic because it changes so much and it's hard to, sort of, explain everything in there, but it talks about how many work units have been received by each server, and it has totals at the bottom.

And each hour, it [ audio issues ] I think the totals are on the order of like 4,000 or something like that, or 5,000... And so, it is actually a huge number of work units coming through, you know, every day.


NJ

Do you have any idea how much bandwidth gets used daily by all those work units coming to and from your servers?


VP

Every once in a while we do [a (?)] calculation and we get an audit from the university.

What's interesting is that we are usually in the top 10 biggest bandwidth users at Stanford, but, huuu, I don't think we've ever been in the top 5... which is actually something I'm proud of, because we're trying to do everything we can to minimize the bandwidth, not just for Stanford but also for all the donators. Huuu, but actually, I don't know the number [at hand (?)], though you can probably guess that it's roughly a hundred thousand work units that go through a day, and each one on average is, I don't know, maybe like half a megabyte or something; it's just a sort of very rough back-of-the-envelope type of calculation.


NJ

huhum


VP

So if it's 100,000 times half a megabyte, that would be 50, 50 Megs, so... huu, I don't know, I mean, I think... 50 Gigs, sorry, sorry, 50 Gigs worth of data that gets sent to us, and, since every time we get a workunit you probably get one, [... ??? ...] hundred Gigs worth of data transferred every day.
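
That back-of-the-envelope estimate, written out with the figures quoted above (roughly 100,000 work units a day at about half a megabyte each, and assuming the matching download is about the same size):

  # Back-of-the-envelope server bandwidth, using the figures quoted above.
  work_units_per_day = 100_000
  megabytes_per_wu = 0.5                 # rough average size of a returned result

  inbound_gb = work_units_per_day * megabytes_per_wu / 1024   # results uploaded to Stanford
  total_gb = 2 * inbound_gb              # assuming a similar-sized download for each new WU

  print(f"~{inbound_gb:.0f} GB/day inbound, ~{total_gb:.0f} GB/day total")
  # -> about 50 GB/day in, about 100 GB/day total, as estimated above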


NJ

OK .. hummm kinda getting into the bandwidth for the end user-type deal.

On our team, a lot of people ask how they get their administrators, or their work or school systems, to start folding.

And one of the things that they hear is, uh, the administrator [... ??? ...] bandwidth is minimal, but they want hard numbers; they want, you know, a way to say, ok, this is not going to be, or it is going to be, a problem.

Does your team provide information on how much internet or network bandwidth per processor, or even bandwidth per workunit, Folding@home would average?


VP

Yeah, that's [easily done (?)], I can provide it right now. I think roughly it's about like, it wouldn't be any more than a m... if you're not, as long as you're not doing big work units, then like a megabyte per processor per day.


NJ

It's about a megabyte a processor a day?


VP

Yeah... So I think people, donators who run Folding@home, can do their own estimates.

It's not gonna vary drastically from that by more than like a factor of 2 or 3 or something.
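
For an administrator who wants the "hard numbers" mentioned earlier, the per-machine figure above turns into a simple estimate (the machine count is an example input; the factor of 3 is the upper end of the variation Dr. Pande mentions):

  # Rough bandwidth estimate for a lab or school, from the ~1 MB/processor/day
  # figure above. The machine count is an example; the safety factor is the
  # upper end of the "factor of 2 or 3" quoted above.
  processors = 50
  mb_per_processor_per_day = 1
  safety_factor = 3

  worst_case_mb_per_day = processors * mb_per_processor_per_day * safety_factor
  print(f"Worst case: ~{worst_case_mb_per_day} MB/day "
        f"(~{worst_case_mb_per_day * 30 / 1024:.1f} GB/month)")
  # -> ~150 MB/day, or about 4.4 GB/month, for this hypothetical 50-machine lab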


NJ

Humhumm


VP

But, I think, in terms of bandwidth, it's really pretty minimal... if you have any sort of, uhh... it can still be... at home for a while we just had a modem, and so... I know how pitiful that could be, even, you know, for a few Megs, but, you know, [at (?)] a school or something like that, they're not gonna be on a modem and the bandwidth should not be an issue at all.

I think that the grander issue at times I think is the memory. [In so many (?)] schools, the computers in the schools don't have a lot of memory... And for those, if they're really on the low side, then Folding@home could become a problem. And by low I mean like 64 Megs or you know, something like that...


NJ

And staying on another bandwidth issue: since more and more people are gonna be migrating to DSL or cable modem and things like that for high-speed broadband, you've introduced work units that can be specifically asked for that are [...???...] of 5 Megs. Are you planning on having an increase in the size of those work units, if the 5 Megs work units are particularly successful?


VP

Yes, I think right now the "big work units" box is at 5 units... 5 Megs and greater, and I forgot what the new limit is.

The new limit is pretty big, it's like 10 or 20. So anywhere within that range of 5 to 20 is acceptable when you click on the "big work units" box.

We're still trying to keep these things small, as small as possible, but I think for some of the big ones now, the data being sent back is approaching 7 or 8 Megs or something like that.

So for some of the big ones, it is getting pretty big. Humm, I think just for storage on our side there's only so big we can make these things, because for every work unit people send back, we have to store that here, and our storage is not infinite, obviously, and so that can really quickly add up, and so we're probably not gonna get any bigger than that any time soon.


NJ

And... let's see... On the 5 Megs work units, obviously, the RAM requirements have exploded as far as what they would use.

With the invention of newer and faster machines and cheaper RAM, do you anticipate those requirements will keep increasing, or have you kinda hit a point where you're kinda comfortable, at this point?


VP

Yeah, I think for most things, most of the current calculations, uuuh, they're, the RAM requirements are gonna go up very significantly... uhuhhh... there's a scoop I can give you guys which is out there, which is the possibility of another core coming online, which would be [very radically (?)] different types of calculations and [... ??? ...] very... it's trivial, it's nothing, it's more like Tinker [ ... ??? ... ] even less than that... but the memory requirements would be pretty high, more like a Gig.

And so those would go out as big work units; even though they're not big in the bandwidth, they're big in memory, and, huuu... those are... I don't know when we're gonna be able to... we're running the core internally right now, but, uhhh, so there would be some that would require a lot of memory.

We can make sure, though, that these don't run on machines that have, huuuu, not just, huuu, don't have enough memory; but we wanna, we always try to make sure that we don't take all of the memory that people have, and so we'll... unless you're running something like advanced methods, where it says that you're asking for something more complex.

But yes, there would easily be things, I think, that would require one Gig.


NJ

Ok... well, that kinda wraps up all the questions that I have, and I really appreciate your time in [the (?)] answering the questions, and I guess we can wrap this up for now.

Did you have anything else you wanted to say or ... ?


VP

Huuu no, ... I guess I'd just huuu y'know I really do appreciate actually all the things that you've done...

The article on the website with Apple is really great, and I think that was a huge help and contribution, and also all your work with the team, I think, is a great contribution to Folding@home, and I really thank you for all your passion and all the hope you've had to give us.

And your team's done great things too, and, well, especially [something (?)] I appreciate about [them (?)] is the fact that you guys are proactive about doing something about it, and I think Tom's work with looking at the Tinker core is a great example of it.

So I, you know, I unfortunately don't have time to do interviews like this for every team, but you guys have done so much that it was just a joy chatting with you guys... and hopefully we'll see you guys participating more in the future.


NJ

Well, I really appreciate that. That means a lot coming from you, and we will definitely continue our work to make sure that folding is a great success.

I really appreciate your time and I'm glad that we didn't get cut off ...


VP

Yeah, me too.

Ok great, so thanks a lot and ... I hope we keep in touch...


NJ

All right...


VP

Yeah, thanks.


NJ

Bye


VP

Bye
