
BlueHorseshoe

Learning Vs Curve-fitting Vs Lag


Having systems that 'learn' from and adapt to changes in market behaviour seems like a great idea, but . . .

 

  • If a system is too receptive it learns too readily and curve-fits to inconsequential noise in the price data.
     
  • If a system seeks to avoid this by using large data samples to make robust generalised inferences about market behaviour, there is a risk that its resistance to change will cause it to seriously lag any significant shift in behaviour.

There is necessarily a "sweet spot" between the two that indicates the optimal learning rate for the system.
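The tradeoff can be sketched with the simplest possible 'learner', an exponentially weighted moving average, where a single learning rate alpha plays the receptive-vs-robust role. This is only an illustration of the dilemma in the post, not anyone's actual system; the step-change data is made up:

```python
import random

def ewma(prices, alpha):
    """Exponentially weighted estimate of the 'current' mean price.
    alpha is the learning rate: near 1.0 trusts only the latest bar,
    near 0.0 barely updates the running estimate."""
    est = prices[0]
    out = []
    for p in prices:
        est = alpha * p + (1 - alpha) * est
        out.append(est)
    return out

random.seed(42)
# 100 bars of noise around 100, then a genuine regime shift to 110
prices = [100 + random.gauss(0, 1) for _ in range(100)]
prices += [110 + random.gauss(0, 1) for _ in range(100)]

fast = ewma(prices, 0.5)   # receptive: chases bar-to-bar noise
slow = ewma(prices, 0.02)  # robust: smooth, but lags the real shift

# Shortly after the shift the slow estimate is still far from 110,
# while the fast estimate has nearly caught up (at the cost of
# wiggling with every noisy bar during the stable stretch).
print(round(fast[110], 1), round(slow[110], 1))
```

The same single number controls both failure modes, which is exactly why a "sweet spot" must exist somewhere between them.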

 

However, to find this "sweet spot" one is faced with precisely the same problem that I identified above. To mediate between the curve-fitted solution and the lagging solution, a new variable must be introduced, and it too must be mediated with some criterion for optimisation . . .
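To make the regress concrete: suppose you try to pick alpha by testing which value would have forecast best "recently". The word "recently" immediately introduces a window length, a new free parameter with exactly the same curve-fit-vs-lag character as alpha itself. A hypothetical sketch; the grid, windows, and data are arbitrary:

```python
import random

def ewma_forecast_error(prices, alpha):
    """Mean one-bar-ahead absolute forecast error of an EWMA tracker."""
    est, err = prices[0], 0.0
    for p in prices[1:]:
        err += abs(p - est)            # error of the forecast made before this bar
        est = alpha * p + (1 - alpha) * est
    return err / (len(prices) - 1)

def tune_alpha(prices, window):
    """Pick the alpha that forecast best over the last `window` bars.
    Note the regress: `window` is itself a new free parameter that
    trades curve-fitting (small window) against lag (large window)."""
    recent = prices[-window:]
    grid = [i / 20 for i in range(1, 20)]   # candidate alphas 0.05 .. 0.95
    return min(grid, key=lambda a: ewma_forecast_error(recent, a))

random.seed(7)
# Noise around 100, then a shift to 105 near the end of the sample
prices = [100 + random.gauss(0, 1) for _ in range(200)]
prices += [105 + random.gauss(0, 1) for _ in range(100)]

# Different choices of the *new* variable can pick different "optimal" alphas:
print(tune_alpha(prices, 30), tune_alpha(prices, 250))
```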

 

I can't find anything in the machine learning literature that I've read that suggests a viable way out of this catch-22. Does anybody have any suggestions?

 

BlueHorseshoe


In 'plain' English:

That "sweet spot" is really not very sweet:

It is hard (that's an understatement) to find a "variable" (actually a set of variables) that will consistently "mediate", i.e.

it is hard to stay close to that "sweet spot" … and, worst of all,

even staying close to it produces below-average results.

 

I chose to forego the large sample side (your second dot) and commit to the very granular side (your first dot).

Some constructs and ‘beliefs’ underlying my gestalt:

I had to resist the concept that there is "inconsequential noise". I 'know'/'believe' there is "inconsequential noise", but in my R&D I had to act/proceed as if it didn't exist. In the end, changes in the noise turned out to be pivotal information.

 

I personally left 'signal generating' machine learning to others and specialized in 'categorization' algorithms … which had to be further specialized to weighting simultaneous categories instead of narrowing to a single category from a discrete set.
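One way to read "weighting simultaneous categories" is a softmax over raw category scores rather than an argmax pick. A minimal sketch; the regime names and scores below are hypothetical, not the poster's actual categories:

```python
import math

def soft_weights(scores):
    """Turn raw category scores into simultaneous weights (softmax)
    instead of collapsing to the single best category (argmax)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical regime scores for one bar: trend-up, congestion, trend-down
scores = [1.2, 0.9, -0.5]
weights = soft_weights(scores)

hard = scores.index(max(scores))   # discrete choice: one winner takes all
# The soft version keeps every category alive in proportion to its score,
# so downstream sizing can blend regimes instead of flipping between them.
print(hard, [round(w, 2) for w in weights])
```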

 

A lot of the info to be gathered from price streams for me turned out to be measuring micro swing scaling … what could be seen (in very loose terms) as fractional dimensions. I say very loose because the term fractional dimensions gets at capturing the concept, but it is not about using the 'real' fractional dimensions that Sevcik et al. calculate.
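For readers curious about the 'real' measure being contrasted here, a sketch of Sevcik's waveform fractal-dimension approximation, which the post explicitly says its own scaling measure does *not* literally use:

```python
import math
import random

def sevcik_dimension(y):
    """Sevcik's approximation of a waveform's fractal dimension:
    map the window onto the unit square, measure the curve length L,
    then D ~ 1 + ln(L) / ln(2 * (N - 1)).  Close to 1 for a smooth
    line, approaching 2 for a curve jagged enough to fill the square."""
    n = len(y)
    lo, hi = min(y), max(y)
    span = (hi - lo) or 1.0                  # guard against a flat window
    ys = [(v - lo) / span for v in y]        # y -> [0, 1]
    dx = 1.0 / (n - 1)                       # x -> [0, 1]
    length = sum(math.hypot(dx, ys[i] - ys[i - 1]) for i in range(1, n))
    return 1.0 + math.log(length) / math.log(2 * (n - 1))

random.seed(1)
smooth = [i / 99 for i in range(100)]            # a clean straight trend
jagged = [random.random() for _ in range(100)]   # pure noise

print(round(sevcik_dimension(smooth), 2), round(sevcik_dimension(jagged), 2))
```

The trend scores near 1 and the noise scores well above it, which is the kind of "micro swing scaling" contrast the paragraph gestures at.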

 

A lot of my progress came from just lucking into code for several excellent ‘music typing’ machine learning programs that helped me conceptualize the combinations of variations of cadence, ‘timbre’, tone, etc. for transfer over to granular price and volume data.

 

In the intraday time frames I work with, the half-life of a 'regime' of these simultaneous categories is very short. A lot of plain old testing went into projecting the probabilities of which array would appear next. Detecting and loosely categorizing the noise then, in effect, gives me a ballpark weighting for sliding the sizing around a portfolio of (some pretty dumb, simple) systems. Sliding the weighting around more accurately makes me money by saving me money… especially through early detection of the beginnings and ends of congestions.
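The sizing-slide idea might be sketched like this: soft regime weights scale each simple system's size through a per-system affinity table. All of the names and numbers here are hypothetical, invented purely to illustrate the mechanism described above:

```python
def slide_sizing(base_sizes, regime_weights, affinity):
    """Scale each simple system's base size by how well it suits the
    current (soft) regime mix.  affinity[s][r] says how well system s
    tends to do in regime r -- a made-up table, not anyone's real edge."""
    sized = {}
    for s, base in base_sizes.items():
        fit = sum(w * affinity[s][r] for r, w in regime_weights.items())
        sized[s] = base * fit
    return sized

base_sizes = {"breakout": 1.0, "mean_revert": 1.0}

# Soft regime weights (summing to 1) -- e.g. congestion detected early:
regime_weights = {"trend": 0.2, "congestion": 0.8}

affinity = {
    "breakout":    {"trend": 1.5, "congestion": 0.3},  # shrink in chop
    "mean_revert": {"trend": 0.4, "congestion": 1.4},  # grow in chop
}

# Money is "made by being saved": the breakout system is throttled down
# before the congestion fully forms, rather than switched off after it.
print(slide_sizing(base_sizes, regime_weights, affinity))
```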

…there is some obscure work out there about formalizing the "sweet spot". If I get some time, I will see if I have anything in the archives… but I can't even think of what terms to start searching on at this point…

 

Suggestion: Find your own way. It may be in focusing on your first 'dot' above. It may be in the second dot. Or it may be in finding that "sweet spot" between the dots. In my experience, the one that inspires you most will at least engender the most perseverance and creativity. Hopefully, that one also fits with your aptitudes and talents...



