andro

Members
  • Content Count

    2
  • Joined

  • Last visited

Personal Information

  • First Name
    TradersLaboratory.com
  • Last Name
    User
  • City
    Wien
  • Country
    Austria
  • Gender
    Male

Trading Information

  • Vendor
    No
  1. Python and NumPy; for code overfilled with "if"s, Cython. And just to be precise, I'm talking about code "vectorization" and "parallelization", but the same applies to CUDA. (See the vectorization sketch after this list.)
  2. Unless you are executing the same process for many tickers ;-) From a simple view of the world and our actions in it - yes, it is not really possible to do things in parallel. But once you look a bit more closely at what is actually going on, you'll find plenty of opportunities for parallel computing. Anyway, back to topic: CUDA usually means heavy C programming. If you are able to do that, then first try writing a backtest for your data on the CPU in C... you may find the performance sufficient. Like me - I was planning to do the same, then found that with optimized libraries and 4 CPU cores my speedup was :haha: 250x. (See the multi-ticker sketch after this list.)
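
Post 1 recommends replacing if-heavy Python loops with NumPy vectorization (or Cython). Below is a minimal, hypothetical sketch of that idea using a toy long/short signal; the data and function names are placeholders, not code from the thread. The branchy loop and the single np.where expression give the same result, but the vectorized version pushes the per-element branching into compiled NumPy code.

import numpy as np

def signal_loop(prices, ma):
    # If-heavy Python loop: each element is handled in the interpreter.
    out = np.empty_like(prices)
    for i in range(len(prices)):
        if prices[i] > ma[i]:
            out[i] = 1.0   # long
        else:
            out[i] = -1.0  # short
    return out

def signal_vectorized(prices, ma):
    # Same logic as one NumPy expression; the branching runs in C.
    return np.where(prices > ma, 1.0, -1.0)

if __name__ == "__main__":
    # Toy price series and moving average (stand-in data, not real quotes).
    rng = np.random.default_rng(0)
    prices = 100.0 + rng.normal(0.0, 1.0, 1_000_000).cumsum() / 1000.0
    ma = np.convolve(prices, np.ones(20) / 20.0, mode="same")
    assert np.array_equal(signal_loop(prices, ma), signal_vectorized(prices, ma))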
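
Post 2's point about "executing the same process for many tickers" is the classic embarrassingly-parallel case: each ticker's backtest is independent, so a process pool over 4 CPU cores (as in the post) can run them concurrently. The sketch below is a hypothetical illustration with stand-in random data and a naive momentum rule, not the poster's actual backtest.

import numpy as np
from multiprocessing import Pool

def backtest_one_ticker(ticker):
    # Independent backtest for a single ticker, returning a summary P&L.
    # The "data" here is random stand-in returns; real code would load prices.
    rng = np.random.default_rng(sum(map(ord, ticker)))
    returns = rng.normal(0.0005, 0.01, 250_000)
    signal = np.where(np.roll(returns, 1) > 0, 1, -1)   # naive momentum rule
    return ticker, float((signal * returns).sum())

if __name__ == "__main__":
    tickers = ["SYM%03d" % i for i in range(100)]        # hypothetical universe
    with Pool(processes=4) as pool:                      # 4 cores, as in the post
        results = dict(pool.map(backtest_one_ticker, tickers))
    best = sorted(results.items(), key=lambda kv: kv[1], reverse=True)[:5]
    print(best)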