linear regression - doParallel in R - Improvement in speed but CPU is not always utilised to 90%-100%


I am trying to run many linear regressions, and diagnostics on them, and to speed things up I am using the doParallel package in the R programming language.

I have come across an interesting issue, though. Although I have seen a performance improvement, as expected, the CPU usage is not consistent.

For example, on one run of the code, CPU utilization may be at 30-40% across the cores.

If I run the same code again, it is possible that CPU utilization goes up to 90%, without me having changed anything in the meantime.

In both cases, I am not running anything else at the same time.

Is there an explanation for why the cores are used at 30% one time and at 90% another time, without me changing anything?

I am running Windows XP with 4 GB of RAM, and have an Intel(R) Xeon(R) CPU X5650 @ 3.67GHz.

My code looks like:

    results <- foreach(i = seq_len(regressions), .combine = 'rbind',
                       .export = c('lmresults', 'lmcsig', 'lmcvar')) %dopar% {
        # Fit the i-th model specification against the shared data frame.
        model <- lm(as.formula(as.character(dat[i])), data = df)
        lmresults(model)
    }
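For the %dopar% loop to actually run in parallel, a backend has to be registered first; otherwise foreach silently falls back to sequential execution. A minimal sketch of that setup (regressions, dat, df, and lmresults are the objects from the question; the worker count is only an illustration):

    library(doParallel)

    # One worker per core reported by the OS. Note that detectCores()
    # counts logical cores, so on a hyper-threaded CPU this can start
    # more workers than there are physical cores.
    cl <- makeCluster(detectCores())
    registerDoParallel(cl)

    # ... run the foreach loop above ...

    # Release the worker processes when finished.
    stopCluster(cl)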

Your system has become memory constrained, either within R itself or because other programs are taking memory resources. When that happens, Windows uses the hard drive as virtual memory. This has the advantage of allowing more operations, but the disadvantage of being slower, much slower. I struggled to find a way to lock R's memory in physical RAM instead of virtual RAM, then realized that it wouldn't help: when you launch a parallel task, R launches new processes, so even if you locked the main R instance's memory into physical RAM and forced other running programs into virtual memory, the new worker processes would be the ones being put in virtual memory.
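If that is what is happening, one workaround, sketched here under the assumption that the data fits comfortably with fewer copies, is to register fewer workers: each doParallel worker on Windows is a full R process holding its own copy of the exported data, so fewer workers means a smaller total footprint. The worker count of 2 is purely illustrative:

    library(doParallel)

    # Fewer workers means fewer simultaneous copies of df in RAM;
    # trading some parallelism for staying out of the page file can
    # make the run faster overall on a 4 GB machine.
    cl <- makeCluster(2)
    registerDoParallel(cl)

    # ... run the regression loop from the question ...

    stopCluster(cl)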

It is interesting that you're running Windows XP in a corporate environment, given that Microsoft ended support for XP as of April 8, 2014. That being said, you'd benefit more from additional RAM than from upgrading the OS version, except that, of course, Windows XP has a 4 GB limit, so you can't add more RAM without upgrading the OS.
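To confirm that ceiling on a given machine, R on Windows exposes two helpers (they were removed in R 4.2, but exist on an XP-era installation), and object.size() shows how big the data being copied to each worker is:

    # Size of the data frame that every worker process will copy.
    print(object.size(df), units = "MB")

    # Windows-only: MB currently used by this R process, and the
    # maximum Windows will let it use. If workers * data size
    # approaches the limit, paging to disk is likely.
    memory.size()
    memory.limit()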

