
Rhythmic Complexity #89

Open
tr3sleches opened this issue Mar 8, 2019 · 18 comments

@tr3sleches

tr3sleches commented Mar 8, 2019

So I solved abraker's rhythm complexity. What do you all think?

I started out with 4 assumptions

  1. Rhythm complexity decays with time.
  2. If the time between objects remained constant forever, rhythm complexity would converge to a value.
    The value it converges to is explained later; the assumption is just that it converges.
  3. If the time between objects (delta t) stays the same, rhythm complexity will only increase up to a certain limit.
    Rhythmic value does not have a limit (within reason), but the influence speed has on rhythmic value is capped. Ex. 250 bpm and 260 bpm streams do not differ as much in rhythmic complexity as 170 bpm and 180 bpm streams do.
  4. Changes in delta time increase rhythm complexity based on a harmonic and on the current value of the rhythmic complexity itself.
    Basically, this assumption states that a change in delta time (a more complex rhythm) increases rhythm complexity f by a multiple of f and a harmonic with period 2*pi/(interval). See abraker's doc for more info on this, though I apply it a bit differently.

The first assumption implies that if f(t) is rhythm complexity at time t, then f'(t) = -af for a positive real number a. This decays rhythmic complexity by a factor of e^(-a) every millisecond (since time is measured in milliseconds for osu).

The second assumption implies that if the time between objects doesn't change, f converges to a value c. The convergence from a rhythm complexity f(0) at t = 0 to f(infinity) = c is modeled as
f(t) = c + (f(0) - c)e^(-at).
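
(For completeness, this is just the solution of the decay-toward-c ODE implied by assumptions 1 and 2; a quick derivation in LaTeX:)

```latex
\frac{df}{dt} = -a\,(f - c)
\;\Longrightarrow\;
\ln\frac{f(t)-c}{f(0)-c} = -a t
\;\Longrightarrow\;
f(t) = c + \bigl(f(0)-c\bigr)e^{-a t}
```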

The third assumption implies that c is proportional to a logistic function of delta t that always decreases (the typical logistic function flipped across the y-axis).
Example: c(Δt) = k1 + k2/(1 + e^(0.1*(Δt-85)))
The constants 0.1 and 85 are subject to change, and k1 and k2 are to be determined.
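
To get a feel for assumption 3 with this shape, here's a quick numeric check (k1 and k2 are made-up placeholders, since they're to be determined):

```python
import math

K1, K2 = 1.0, 4.0  # placeholders; "to be determined" above

def c(delta_t_ms: float) -> float:
    """Reversed-logistic convergence value from the example above."""
    return K1 + K2 / (1.0 + math.exp(0.1 * (delta_t_ms - 85.0)))

# 1/4-note delta t in ms for a given bpm is 60000 / bpm / 4.
for bpm in (170, 180, 250, 260):
    dt = 60000 / bpm / 4
    print(f"{bpm} bpm: dt = {dt:.1f} ms, c = {c(dt):.3f}")
# The 170 -> 180 jump moves c far more than 250 -> 260 does,
# which is exactly the capped speed influence from assumption 3.
```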

Setting the third assumption aside (for now), combining the first two assumptions gives us:
df/dt = -af + sum(g(t-t_i)*u(t-t_i))
where f is rhythm complexity, g(t-t_i) is the function such that f(t) = c(Δt_i) + (f(t_i) - c(Δt_i))e^(-a(t-t_i)) on each interval, u(t) is the Heaviside step function, and t_i is the time of object i. You will see this function a lot throughout this.
I combined them this way because both the decay and the non-homogeneous portion affect how the graph changes and interact with each other. This models that accordingly.

After some Laplace transform manipulation (plug the function in the line above in for f), you get
g(t-t_i) = a*sum[c(Δt_i)*(1 - u(t-t_(i+1)))]
g(t-t_i)*u(t-t_i) = a*c(Δt_i)*(u(t-t_i) - u(t-t_(i+1))), because u(t-t_i)*u(t-t_(i+1)) = u(t-t_(i+1)).
Here Δt_i = t_(i+1) - t_i.
While putting it all together, you add the fourth assumption. This is where it gets tricky.
It implies a multiplicative increase in rhythmic complexity when a change in delta time occurs, because a 1/2 to 1/4 change in rhythm should increase complexity by the same percentage at 230 bpm as at 170 bpm. If you instead add a fixed amount, the impact is smaller at higher bpm, when in fact the impact should be the same relative to the bpm.
Let p(T_i) = 1 + b*(1 - cos(2pi*T_i))
where T_i = Δt_(i-1)/Δt_(i-2)
Then
f(t) = [c(t_1-t_0) + (f(t_0) - c(t_1-t_0))*e^(-a(t-t_0))]*(u(t-t_0) - u(t-t_1))
 + sum([c(t_(i+1)-t_i) + (p(T_i)*f(t_i) - c(t_(i+1)-t_i))*e^(-a(t-t_i))]*(u(t-t_i) - u(t-t_(i+1))))
over all objects i.
However, this poses a problem: p(T_i)*f(t_i) exposes a contradiction. In that segment of the equation, it implies f(t_i) = p(T_i)*f(t_i), which is impossible unless p(T_i) = 1 or f(t_i) = 0.
It's time for difference equations. We now must find f(t_i) in terms of values that are not a function of f at the same point (as written, it's not in a form a computer can use).
We have to come up with a difference equation given this recursive relation.

f(t_i) = (c(Δt_(i-1)) + (f(t_(i-1)) - c(Δt_(i-1)))*e^(-a*Δt_(i-1)))*p(T_i)
for each object.
This part is easy to program in a computer. You can replace the Δt_(i-1) in the exponent with t - t_(i-1) to calculate the rhythm complexity at any point in time between objects i-1 and i. Use this to calculate the rhythm complexity at the end of a 400 ms chunk if the end of the chunk happens before the next object.
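
As a sketch of how that recurrence would look in code (every constant here, a, b, k1, k2, is a placeholder still to be determined):

```python
import math

A = 0.004          # placeholder decay rate per millisecond
B = 0.5            # placeholder harmonic weight b in p(T)
K1, K2 = 1.0, 4.0  # placeholder logistic parameters for c(dt)

def c(dt: float) -> float:
    """Reversed-logistic convergence value (assumption 3)."""
    return K1 + K2 / (1.0 + math.exp(0.1 * (dt - 85.0)))

def p(T: float) -> float:
    """Multiplicative bonus for a change in rhythm (assumption 4)."""
    return 1.0 + B * (1.0 - math.cos(2.0 * math.pi * T))

def rhythm_complexity(times: list[float]) -> list[float]:
    """f(t_i) per object, straight from the difference equation."""
    f = [0.0] * len(times)
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]                  # delta t_(i-1)
        fi = c(dt) + (f[i - 1] - c(dt)) * math.exp(-A * dt)
        if i >= 2:                                    # T_i needs two deltas
            fi *= p(dt / (times[i - 1] - times[i - 2]))
        f[i] = fi
    return f

# 1/4 notes at 180 bpm with one 1/8 gap at the end: the last object
# gets the p(T) bump because its delta t halves.
times = [i * 83.3 for i in range(8)] + [7 * 83.3 + 41.7]
print(rhythm_complexity(times))
```

(To evaluate between objects, swap the Δt in the exponent for t - t_(i-1), as described above.)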

This might be all we need for the code, but sometimes referencing previous values of something can be troublesome, so I solved the difference equation just in case. Edit: replace the m with an i.
[image: handwritten solution of the difference equation]
If I end up using this solution in the code instead (which I probably won't), I'll group the exponents of t_n and t_i together to avoid overflow.

@girantinas

girantinas commented Mar 9, 2019

[image: LaTeX rendering of the difference equation]

Here's the LaTeX for the difference equation, so it's more readable. Edit: fixed forgetting the distributive property.

@tr3sleches

tr3sleches commented Mar 9, 2019

The p(T) is supposed to multiply the whole thing, not just the (f - c) part.
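
Written out (my reading of the corrected form), the difference equation would be:

```latex
f(t_i) = \Bigl[\,c(\Delta t_{i-1})
  + \bigl(f(t_{i-1}) - c(\Delta t_{i-1})\bigr)\,e^{-a\,\Delta t_{i-1}}\Bigr]
  \cdot p(T_i)
```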

@joseph-ireland

joseph-ireland commented Mar 9, 2019

Just trying to follow along again.
Where does g(x) get t from when you only pass in t-t_i? It doesn't necessarily detract from the final answer; you can just start from g(t), or from

df/dt = -af + sum(g(t-t_i)*u(t-t_i)*(1 - u(t-t_(i+1))))

directly, although it's less cool looking. Maybe you could also justify skipping straight from f(t) = c(Δt_i) + (f(t_i) - c(Δt_i))e^(-a(t-t_i)) to the final answer instead?

For your initial f(t), can't you change the boundary points of your step function so there's no contradiction? Then f(t_i) would reference f(t_(i-1)) instead of itself. Seems similar to your final difference equation, but looks like the p(T) factor is in a different place.

Also it strikes me that the most rhythmically complex pattern I can think of according to this is two triplets followed by a triplet rest repeated, taking advantage of that sweet max value P every other note. That also happens to be relatively easy to play in osu. Maybe any rhythmic complexity function needs to have more memory than just the previous delta t?

BTW you can easily embed LaTeX yourself, just go to https://www.codecogs.com/latex/eqneditor.php and typeset your stuff, then copy the HTML from the bottom of the page into your comment (e.g. the equation above). It would make everything so much more readable.

Finally, just want to say that I'm not trying to shit all over your work or anything, I couldn't do any of this stuff, especially at the rate you're churning it out. Just trying to understand and check for mistakes, since I feel like there might not be many people here with the relevant experience to do so.

@girantinas

girantinas commented Mar 9, 2019

g(t) exists as a function used to define f(t), but we need to know what it looks like (this is a brainstorming step rather than a justification step). Namely, for the non-homogeneous part of the diff EQ, we have to show the relationship between it, u(t), and c(t). Naturally, we're shifting everything over by t_i, so realize f is actually a family f_i of functions, one for each hitobject.

As for the whole P thing, that's more a gripe with abraker's model (fingers as harmonic oscillators). There's probably a different way to interpret this than the functions chosen, and that is what is missing right now.

@tr3sleches

tr3sleches commented Mar 9, 2019

Also, I'm thinking that if this bonus is going to feed into speed in some way, I will multiply the speed strain at an object by (1 + f(t)) and make c = 0 and independent of speed.
I will start using that LaTeX thing, though.

g(t) exists as a function used to define f(t), but we need to know what it looks like (this a brainstorming step rather than a justification step)

I mean, I gave you the function; I can show you a graph if you really want. a is a constant, and c changes with the time interval of the object, not directly with t, so for the purposes of a diffEQ in t, c is constant. The function is just a*c from the current object to the next one and zero everywhere else. Honestly though, how g(t) looks isn't really all that important, because it only exists to define f. If you mean to draw the convolution to show how g(t) impacts the definition of f or something, then no.

Where does g(x) get t from when you only pass in t-t_i? It doesn't necessarily detract from the final answer; you can just start from g(t), or from
df/dt = -af + sum(g(t-t_i)*u(t-t_i)*(1 - u(t-t_(i+1))))
directly, although it's less cool looking.

Idk how to do this LaTeX stuff lol.
I defined g(t) like that to account for jump changes in rhythm when the time between objects changes. Since that's technically a convolution, its Laplace transform is G(s)*sum(e^(-s*t_i)) (I'll put this into LaTeX later, but I'm not at a computer rn).
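
For reference, the shift property doing the work there is:

```latex
\mathcal{L}\{g(t - t_i)\,u(t - t_i)\}(s) = e^{-s t_i}\,G(s)
\;\Longrightarrow\;
\mathcal{L}\Bigl\{\sum_i g(t - t_i)\,u(t - t_i)\Bigr\}(s) = G(s)\sum_i e^{-s t_i}
```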
I made it in terms of t just in case I had to return a value within an interval because of the chunking of difficulty and such, so complexity can carry over from one chunk to the next (I wanted it to work for both my difficulty system and the status quo). Given what g is, I guess it's really not needed. If f jumps, g(t) needs to account for that, and defining g(t) for each interval between objects was just easier. I could just use that equation above, 🤷🏿‍♂️. If I do end up making c equal 0 or a constant, I will probably just define g(t) in its entirety instead of across each partition.

Maybe you could also justify skipping straight from f(t) = c(Δt_i) + (f(t_i) - c(Δt_i))e^(-a(t-t_i)) to the final answer instead?

I dunno. Since I'm definitely going to change that harmonic, the function p might end up with some direct dependence on t, like a continuous function that outputs randomness or something, and then the diffEQ will become necessary (it really isn't as this currently stands).

For your initial f(t), can't you change the boundary points of your step function so there's no contradiction? Then f(t_i) would reference f(t_(i-1)) instead of itself. Seems similar to your final difference equation, but looks like the p(T) factor is in a different place.

I thought of that myself, but I was tripping over myself so many times (labeling functions with the wrong i) that I didn’t want to. Short answer: yes.

Also it strikes me that the most rhythmically complex pattern I can think of according to this is two triplets followed by a triplet rest repeated, taking advantage of that sweet max value P every other note. That also happens to be relatively easy to play in osu. Maybe any rhythmic complexity function needs to have more memory than just the previous delta t?

You are right about that, some people in the discord and I were having a discussion about that yesterday. I’m not sure what I should replace p with though. If there was a way to track the change in randomness with the addition of an individual object, given that the influence on said randomness of objects past decays with time, I would use something like that instead.

@joseph-ireland
I'm glad you're checking over my mistakes (both of you, actually). For the continuous one, I was scared nobody would, because I knew I was making a mistake somewhere; some of the SR calculations weren't looking right. While I was looking at the system to gather information so I could start figuring out how to do the difficulty calculation, I noticed other flaws and brainstormed how I would do them differently. Other than this rhythmic complexity bit, which was a spur-of-the-moment thing, the others I discuss are just ideas at the back of my head for how I would improve certain elements.
It's so good to have people nitpicking it; that means they took the time to understand it and act accordingly. In fact, I encourage it and wish more people would do the same.

@RogerDodger

RogerDodger commented Mar 23, 2019

Sorry if this has already been discussed, since I can't find any previous discussion on this:

Why is rhythmic complexity being measured as a continuous function of time? A rhythm is a sequence of sounds (i.e. occurring over a segment of time), so calculating the rhythmic complexity at an arbitrary time seems like nonsense.

The difficulty for a player to comprehend a rhythm depends on the rhythms that came before it. A complex rhythm repeated often without any inversion is not as difficult as one repeated with small inversions each time, even if they are similarly complex in isolation.

I think this measure would reasonably be captured by the Lempel-Ziv complexity of the note intervals. It considers repeated sequences to have less complexity, and longer and more irregular sequences to have more. Other complexity measures might be better, but Lempel-Ziv is a simple place to start.

If this hasn't been tried already, I can write some code to calculate this complexity from beatmap files.
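
For concreteness, here's a minimal sketch of the kind of thing I'd write, using a simple LZ78-style phrase count; the interval quantization step is a placeholder choice of mine, not anything standard:

```python
import random

def lz_phrase_count(seq) -> int:
    """LZ78-style complexity: number of phrases produced by a greedy
    left-to-right dictionary parse of the sequence."""
    phrases, current = set(), ()
    for symbol in seq:
        current += (symbol,)
        if current not in phrases:
            phrases.add(current)
            current = ()
    return len(phrases) + (1 if current else 0)

def interval_symbols(hit_times_ms, quantum_ms=10):
    """Quantize successive note gaps so near-equal intervals share a
    symbol (the 10 ms quantum is an arbitrary placeholder)."""
    return [round((b - a) / quantum_ms)
            for a, b in zip(hit_times_ms, hit_times_ms[1:])]

# A constant stream parses into very few phrases...
stream = [i * 83 for i in range(64)]
print(lz_phrase_count(interval_symbols(stream)))    # low

# ...while irregular gaps parse into many.
random.seed(0)
jumbled = [0]
for _ in range(63):
    jumbled.append(jumbled[-1] + random.choice([42, 83, 125, 167]))
print(lz_phrase_count(interval_symbols(jumbled)))   # higher
```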

@tr3sleches

I calculated the rhythmic complexity as a function of time to add a bonus to the respective areas where the rhythm is complex; otherwise a song with a long section of simple rhythms followed by a short section of complex rhythms would be read as more rhythmically complex than it deserves (compared to a song with the same short complex section and a shorter simple section).
No matter what you do, a time-weighting metric of some sort is needed.

I don't know much about complexity algorithms (hence why I just used a harmonic as the particular part). I don't know much about Lempel-Ziv complexity either, but from what I see it seems promising. If you want to write the code to calculate it, go for it. I researched the algorithm a bit; it's much better for pattern recognition and such. From what I understand, it can cause problems when similar isolated patterns are used far apart across the map rather than in near proximity of each other (correct me if I'm wrong). If I could make the input sequences of this function a moving window of sorts, I would be golden.
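
Something like this is what I mean by a moving window (a rough sketch; the 32-interval window length is arbitrary, and the phrase counter is the same LZ78-style one sketched above):

```python
def lz_phrase_count(seq) -> int:
    # same LZ78-style greedy phrase counter as the earlier sketch
    phrases, cur = set(), ()
    for s in seq:
        cur += (s,)
        if cur not in phrases:
            phrases.add(cur)
            cur = ()
    return len(phrases) + (1 if cur else 0)

def moving_lz(symbols, window=32):
    """Phrase count over only the most recent `window` interval symbols,
    so a pattern's influence fades once it leaves the window."""
    return [lz_phrase_count(symbols[max(0, i - window):i])
            for i in range(1, len(symbols) + 1)]
```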

I have been thinking of trying some kind of moving entropy or something, but that requires a lot more math than I have time for right now (college tests right after spring break are a bitch).

Thanks for the response, even though most of it went over my head. I have a feeling I am off about some points of this complexity algorithm; please correct me where I am wrong.

@RogerDodger

RogerDodger commented Mar 26, 2019

Do you have a link to abraker's model, so I can get an idea of where the approach you're working on is coming from?


Script to get the Lempel-Ziv complexity of beatmap note intervals.

Sample output.

The LZ complexity mostly just increases with song length and number of notes. It doesn't seem very useful.

It's a non-linear increase, though, so it's working somewhat, just not to the degree I'd hoped. There is some correspondence between my personal perception of the songs' rhythmic complexity and their LZ complexity, but it's grossly overshadowed by song length and number of notes.

No matter what you do, a time weighing metric of some sort is needed.

Ideally, the complexity measure would see a song with long, repetitive rhythms and consider it as complex as the same rhythms repeated over a short period of time.

Limiting the measurement to a specific time frame might not map well to how a player experiences difficulty with respect to rhythms. As an example, in d:for the DELTA by Natoshi, while the rhythms of the main chorus and bridge are quite complex, they also repeat fully throughout the song. The consequence is that the map as a whole is a lot easier to complete than one with similarly complex bridges and choruses that didn't repeat at all, i.e. where every section was a verse. If you only consider a certain window when measuring complexity, you'll miss the ways players find patterns that make a song easier to play, and rate it the same as another that's harder.

On top of this, I wouldn't want a song with two complex parts to be rated as difficult as a song of half the length with only one complex part. It might be longer, but it does indeed have more complexity!

On the other hand, as the LZ numbers demonstrate, if you don't take the length into account somehow, then you're just going to have longer songs dominating whatever measure you choose.

Sort of a tangent, but maybe it'd be better to think about this problem in terms of rhythmic difficulty rather than complexity. Probably the main reason not to is that it feels more subjective.

@abraker95

@tr3sleches

tr3sleches commented Mar 27, 2019

After looking at and researching the Lempel-Ziv complexity: if you use that, but decay each pattern's influence with time and reset the decay when the pattern recurs, this could work. Something like this:
[image: sketch of the decayed pattern-influence idea]

The second picture is what it would look like if you had the same interval repeating over and over.
It would turn into a convergent series and converge.
[image: the same interval repeating; the series converges]
I think this could work as a multiplicative value on speed (normalized, of course).

You could actually kind of account for the speed bonus like this (given that you don't normalize the LZ for speed, but that is a discussion for later).

Edit: wrong picture. I accidentally posted the picture of my full derivation of speed from the ground up. It's fixed now.

@RogerDodger

RogerDodger commented Mar 27, 2019

Thanks abraker.

And wow, that's a seriously clever model.

I can see how the equations here make sense now: there's a summation which iterates through every previous interval up until that point.

The harmonic function being sinusoidal means its peak is at exactly one half the time of the previous interval, which is why, as stated previously, peak complexity would come from repeated triplets, an extremely simple rhythm.

That the minimum complexity occurs after an interval of equal length is obvious; but what would the second-lowest complexity from an interval be? I'd say an interval with the simplest ratio to the previous interval, 1:2. That's a stark contrast to what the current harmonic function suggests.

Since the model requires the harmonic to be 0 at every interval time t between the related note and the note following it, H(t0) = 0. This also means H(t1) = 0, and because the harmonic is sinusoidal, H((t0 + t1)/2) = 1.

Why should this be the case? An example of a harmonic function that satisfies both H(t0) = 0 and H(t1) = 0, plus the above idea [that H((t0 + t1)/2) is the second-lowest point (edit: well, not really, just a local minimum)]: 4 - (cos(8x) + cos(4x) + cos(2x) + cos(x))

(The above function doesn't satisfy further criteria that could probably be decided, such as a ratio of 1:3 being easier than, say, 1:47.)
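
Evaluating that candidate at a few ratios, assuming x runs from 0 to 2π across the previous interval (my own scaling choice, since nothing above pins it down):

```python
import math

def H(x: float) -> float:
    # candidate harmonic from above
    return 4 - (math.cos(8*x) + math.cos(4*x) + math.cos(2*x) + math.cos(x))

for num, den in [(0, 1), (1, 4), (1, 3), (1, 2), (2, 3), (3, 4), (1, 1)]:
    x = 2 * math.pi * num / den
    print(f"{num}/{den} of the interval: H = {H(x):.3f}")
# H is 0 at both endpoints and has a stationary local minimum at 1/2,
# matching the 1:2-is-second-easiest idea.
```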

Following from this, I think the inappropriate harmonic is what led to the idea of adding a speed function, but what does speed have to do with rhythmic complexity? As an example, consider the pattern:

x---x--x-

Versus

x---xx---

The latter seems to me considerably easier, being a 1/4 interval rather than 3/4. The latter is a simple division of the beat (into a quarter), whereas the former requires more conceptualization: either you think of the last note as 1/4 off beat, or as 3/4 on beat, and 3 quarters is harder to conceptualise than 1 (likewise for shorter intervals; 7/8 is harder than 1/8).

(At the very least, you couldn't say that the off beat interval is significantly easier, which the speed function suggests.)

@abraker95

abraker95 commented Mar 27, 2019

The harmonic function being sinusoidal means the peak of the harmonic function is exactly one half the time of the previous interval, and that's why as stated previously peak complexity would come from repeated triplets, which is an extremely simple rhythm.

Do keep in mind these models are applicable to single finger tapping only. Triplets may seem like a simple rhythm, but they certainly have a higher chance to throw your tapping off if you single tap them.

but what would the second lowest complexity from an interval be? I'd say an interval with the simplest ratio to the previous interval, 1:2. That's a stark contrast to what the current harmonic function suggests.

And then what would you consider the highest complexity? Also, what complexity would you assign to a pattern which keeps halving the interval (1/2 -> 1/4 -> 1/8 -> ...)?

Why is this the case? An example of a harmonic function that satisfies both H(t0)=0, H(t1)=0, and the above idea [ that H((t0 + t1) / 2) is the second lowest point (edit: well, not really - just a local minima) ]: 4 - (cos(8x) + cos(4x) + cos(2x) + cos(x))

Keep in mind we are still talking about single-frequency harmonic components at this point. Harmonic function =/= harmonic: the harmonic function includes the sum of all harmonics interfering with each other, while a harmonic is just a single-frequency oscillation. I should go back and fix things that confuse/mix the two.

Following from this, I think the inappropriate harmonic used lead to the idea of adding a speed function too, but what does speed have to do with rhythmic complexity?

I explained this in the About Rhythmic complexity section:

It is not speed that needs to be factored into rhythmic complexity, but rather rhythmic complexity that needs to be factored into speed. Speed on its own can be a standalone measurement of difficulty, but rhythmic complexity cannot. Think about it: rhythmic complexity as defined here can only yield how affected notes are by it, on a scale from 0% to 100%. Either the pattern has a constant interval and no rhythmic complexity, or the notes are completely irregular and have maximum rhythmic complexity. Without speed, rhythmic complexity is merely an abstract concept. Rhythmic complexity is not difficulty, but rather a description of how parts of patterns contribute to difficulty.

Following from this, I think the inappropriate harmonic used lead to the idea of adding a speed function too, but what does speed have to do with rhythmic complexity? As an example, consider the pattern:

x---x--x-

Versus

x---xx---

The latter seems to me considerably easier, being a 1/4 interval rather than 3/4. The latter is a simple division of the beat (into a quarter), whereas the former requires more conceptualization: either you think of the last note as 1/4 off beat, or it's 3/4 on beat - and 3 quarters is harder to conceptualise than 1 (etc. for shorter intervals; 7/8 is harder than 1/8).

This is a very poor example, because with a pattern of as few as three notes, not enough notes have passed for rhythmic complexity to really take effect. There need to be enough notes for the interference to produce peaks that rival the influence of speed. In your example speed triumphs: the 1/4 note requires you to single tap faster than the 3/4 note, therefore the second pattern is harder.

A better example would be a 180 BPM stream of notes with one note in the stream off beat. The formula should show that one note to be of high rhythmic complexity. Your fingers have a momentum going within the oscillatory movements as they alternate (two fingers) or vibrate (single finger), and changing that to a different frequency so sharply for one note is like hitting a brick wall. Desmos is not powerful enough to run that scenario, but I plan to code it up and test it hopefully soon. If the formula doesn't predict that, then it is indeed flawed.

@RogerDodger

RogerDodger commented Mar 28, 2019

@tr3sleches Sorry but I don't really understand any of that. Not sure how to read it.


Do keep in mind these models are applicable to single finger tapping only. Triplets may seem like a simple rhythm, but they certainly have a higher chance to throw your tapping off if you single tap them.

They're relatively simple because it's easy to think of many much more complex rhythms. While they're not the easiest rhythm, any model that puts them as the hardest or even close to the hardest can't be very useful.

A better example would be considering a 180 BPM stream of notes with one note in the stream off beat. The formula should show that one note to be of high rhythmic complexity.

To clarify, you're thinking of the following?

x---x---x--- ... x--x-

and

x---x---x--- ... xx---

My understanding of the formula is that it'd assign more complexity to the latter pattern. The harmonic is the same for both, but the decay and speed function favour the earlier beat.

If that's a 180 bpm stream and the notes are all at 1/4 intervals to the beat, both patterns are either easy because of an overlapping hit window, or impossible because of the speed. At most ODs it's so fast they're within the hit window of an easier rhythm. Each symbol in this pattern is a 1/16 interval, or ~21 ms, and being off by one symbol at anything less than OD 10 is within the 300 hit window (OD 9 is ±25.5 ms for a 300, OD 10 ±19.5 ms). So you either play the former like it's not off beat at all, or the latter by double tapping.
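
(For reference, the arithmetic behind those numbers; the ±(79.5 − 6·OD) ms window for a 300 is the standard osu! formula:)

```python
def window_300_ms(od: float) -> float:
    """Half-width of the osu! 300 hit window, in ms."""
    return 79.5 - 6.0 * od

beat_ms = 60000 / 180        # one beat at 180 bpm
symbol_ms = beat_ms / 16     # each symbol above is a 1/16 of a beat
print(f"1/16 at 180 bpm = {symbol_ms:.1f} ms")       # ~20.8 ms
for od in (9, 10):
    print(f"OD {od}: 300 window = ±{window_300_ms(od):.1f} ms")
```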

And if the latter pattern doesn't fall within the hit window and is almost impossible to play, it wouldn't be because it's complex, but because of the sheer speed. Factoring this into the function for rhythmic complexity seems unnecessary and redundant, since pp already assigns difficulty for speed.

But regardless, that's also a pattern I don't believe I've ever seen in a ranked beatmap before. (The osu! editor says 1/16 IS NOT RECOMMENDED FOR ANYONE TO USE EVER.)

I may have misunderstood, though, and what you mean is that the notes are all on beat. That is, the notes are at 1/1 intervals to the beat, and the symbols are 1/4 intervals (~83 ms). In that case, it's the same as the pattern I mentioned previously: the former is at the very least equally difficult if not harder, but the latter is assigned more complexity by the current model.

In general, I think when trying to get an intuition for the complexity of a rhythm, thinking about it as slowly as possible is the way to go. Complex rhythms are just as hard, if not harder, at slower tempos.

Speed on its own can be a standalone measurement of difficulty, but not rhythmic complexity. Think about it; rhythmic complexity as defined here can only yield how affected by notes are by it on a scale from 0% to 100%. Either the pattern has a constant interval and has no rhythmic complexity, or the notes are completely irregular and have maximum rhythmic complexity. Without speed, rhythmic complexity is merely an abstract concept. Rhythmic complexity is not difficulty, but rather a description of how parts of patterns contribute to difficulty.

I might be misunderstanding the purpose of this exercise still. Isn't rhythm about intervals, not speed? Musical notation is written entirely in intervals, and a rhythm once understood can pretty easily be performed at any tempo, up to the physical limit of the performer's speed.

If speed were relevant, then it would be true that a rhythm at one slow tempo is easier than one at a slightly faster tempo. I think that the average rhythm is equally difficult at, say, 70bpm as it is at 100.

And then what would you consider to be highest complexity?

That's a really tough question. My current thought is that we can reason about what are comprehensible rhythms, decide on their relative difficulty, and then say everything else is incomprehensible (maximum complexity). This would be intervals like 5/17 or 9/23. The marginal difference in these is practically irrelevant, since nobody will hit them without pure luck (or an absurd dedication to learning that particular interval).

The osu! editor only allows intervals of 1/2, 1/3, 1/4, 1/6, 1/8, 1/12, and 1/16, so we have a finite set to reason about. My guess for the relative difficulties of these intervals is, in order of weight (see the sketch after the examples below),

across both the numerator and denominator:

  • most unique prime factors
  • largest prime factor (2 easier than 3 easier than 5 etc.)
  • number of prime factors in total

Some examples, in order:

1/1
1/2 (= 2/4)
1/3
1/4 (1/2^2)
1/8 (1/2^3)
1/16 (1/2^4)
2/3
1/6 (1/2*3)
3/4 (3/2^2)
3/8 (3/2^3)
3/16 (3/2^4)
5/8 (5/2^3)
7/8 (7/2^3)
11/16 (11/2^4)
13/16 (13/2^4)
5/6 (5/2*3)
15/16 (3*5/2^4)
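
Here's a sketch of one literal reading of that weighting, sorting lexicographically by (unique prime factors, largest prime factor, total prime factors); exactly how the three criteria should trade off is still a judgment call, so this won't reproduce the hand-ordered examples perfectly:

```python
def prime_factors(n: int) -> list[int]:
    """Prime factorization with multiplicity (1 -> [])."""
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

def weight(frac):
    """(unique primes, largest prime, total primes) over num and den."""
    num, den = frac
    fs = prime_factors(num) + prime_factors(den)
    return (len(set(fs)), max(fs, default=1), len(fs))

INTERVALS = [(1, 1), (1, 2), (1, 3), (1, 4), (1, 6), (1, 8), (1, 12),
             (1, 16), (2, 3), (3, 4), (3, 8), (3, 16), (5, 6), (5, 8),
             (7, 8), (11, 16), (13, 16), (15, 16)]

for num, den in sorted(INTERVALS, key=weight):
    print(f"{num}/{den}: {weight((num, den))}")
```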

I think properly answering the question requires some empirical analysis, though.

The first question this answer leaves: is a 15/16 interval really different from a 1/16 interval, or 2/3 from 1/3, or 3/4 from 1/4? 5/16 (and 11/16 and 13/16) is also probably not easier than 5/6.

It's probably also worth reconsidering what exactly the interval is an interval of. The way I'm currently thinking, it's an interval of the beat, since that's how most people compose maps and how most people comprehend rhythm. This is in contrast with the current model, where the interval is of whatever interval came before it. Doing that loses a lot of the elegance of the current model, though, and the question "what is the bpm right now?" doesn't necessarily have a decidable answer.

Maybe a harmonic can be created from a series of notes (say 3 or 4), rather than merely a pair.

Also what kind of complexity do you consider a pattern which keeps on halving the interval be (1/2 -> 1/4 -> 1/8 -> ...)?

It would quickly converge to infinitesimal intervals, which are physically impossible to perform but easy to comprehend. So I'd say low complexity.

@abraker95

They're relatively simple because it's easy to think of many much more complex rhythms. While they're not the easiest rhythm, any model that puts them as the hardest or even close to the hardest can't be very useful.

It doesn't put them as the hardest. That would go to the pattern of decreasing intervals, for which you said:

It would quickly converge to infinitesimal intervals, which are physically impossible to perform but easy to comprehend. So I'd say low complexity.

Since you are talking about rhythmic complexity in terms of comprehending the patterns instead of physical ability, you are talking about something reading-related. That is not what this formula tries to model. Reading is not something we can handle right now, and it's best to just focus on the mechanical aspects of difficulty for now. This formula tries to model how various intervals throw off finger tapping. A pattern does not need a variety of unconventional snappings to do that; it just needs to be set up so that the finger's momentum is more likely to cause strain when the finger adjusts to a different tapping rate.

If that's a 180bpm stream, and the notes are all 1/4 intervals to the beat, both patterns are either easy because of an overlapping hit window, or impossible because of the speed. At most OD's, it's so fast they're within the hit window of an easier rhythm. Each symbol in this pattern is a 1/16 interval, or ~21ms, and being off by one symbol at anything less than OD=10 is within the 300 hit window (OD=9 is +-25.5ms for 300, OD=10 +-19.5ms). So you either play the former like it's not off beat at all, or the latter by double tapping.

Yes, lower OD will certainly make that note less hard, but I am currently not taking that into account. I am assuming near-infinite OD, and that the player's accuracy skill is high enough to guarantee they always hit at the center when notes repeat at regular intervals. So the problem is phrased such that the tiniest change in snapping matters.

In general, I think when trying to get an intuition for the complexity of a rhythm, thinking about it as slowly as possible is the way to go. Complex rhythms are just as hard, if not harder, at slower tempos.

There is a point at which a bpm is hard to single tap because it's too fast for one finger, and hard to alternate because it's too slow for two. That is definitely something to take into account later on.

@RogerDodger

RogerDodger commented Apr 3, 2019

Is there a general process decided on yet for empirically testing these models?

Since you are talking about rhythmic complexity in terms of comprehending the patterns instead of physical ability, you are talking about something reading-related. That is not what this formula tries to model. Reading is not something we can handle right now, and it's best to just focus on the mechanical aspects of difficulty for now.

Ah, okay. That makes a lot of sense. We're talking about different things.

Is there a reason we aren't interested in modelling the comprehension difficulty of actually getting the correct timing? That's mostly what I'm interested in, since I think maps that are difficult on this axis are pretty demonstrably underrated by the current system.


I think the question still remains, however:

Is a triplet rest physically demanding? Contrast it with the pattern of minimal complexity:

x-x-x-x-...
x-x-x---...

Is the latter physically harder, and by a significant amount, simply from removing a note? I'm not a speed player, so I don't know if this is necessarily true at very high bpm, but trying these out myself at 160 bpm 1/4 intervals, the triplets are very slightly easier to 100% acc with about the same unstable rate.

And if it is the case that the pattern is only difficult at high bpm, it's unclear to me how your rhythmic complexity is not just an analogue of speed, nor how the model doesn't break down for sequences that aren't at a player's physical limit.

@abraker95

abraker95 commented Apr 4, 2019

Is there a reason we aren't interested in modelling comprehension/difficulty of actually getting the correct timing?

You mean something like a probability distribution of hitting across offsets?

The latter is physically harder, and by a significant amount, by simply removing a note?

For this isolated case, both of these patterns have the same spacing, so rhythmic complexity should remain the same for each note. Please don't confuse having nothing before these patterns with having something before them. Depending on what that is, rhythmic complexity can vary greatly for these sets of notes.

And if it is the case that that pattern is actually difficult but only at high bpm, it's unclear to me how your rhythmic complexity is not just an analogue to speed, nor how the model doesn't break down for sequences that aren't at a player's physical limit.

The point is to buff speed influence in the presence of something rhythmically complex.

@RogerDodger

RogerDodger commented Apr 4, 2019

You mean something like a probability distribution of hitting across offsets?

I don't know for sure what model to use, but I mean rhythmic complexity. When I google the term, I find things like this, which talk about it entirely in the way I am (and also have some interesting ideas with empirical evidence backing them). But since we're both using the word to mean different things, I'll try to clarify the differences:

  • You mean the mechanical difficulty of a player's finger(s) keeping in time with the notes
  • I mean the conceptual difficulty of a player's brain keeping in time with the notes

Is that accurate?

For this isolated case, both of these patterns have same spacing, so rhythmic complexity should remain same for each note. Please don't confuse having nothing before these patterns and having something before these patterns. Depending on what it is, rhythmic complexity can vary greatly for these sets of notes.

The pattern is repeated. You could precede it with a few beats as well; the question remains:

x-------x-------x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-
x-------x-------x-x-x---x-x-x---x-x-x---x-x-x---x-x-x---x-x-x---

Both of these sequences are about the same mechanical difficulty (by my own testing, anyway). Does your model agree?

This is not even a remotely degenerate example either. Similar sequences often occur in ranked maps.

@abraker95

abraker95 commented Apr 4, 2019

Is that accurate?

yes

Both of these sequences are about the same mechanical difficulty (by my own testing, anyway). Does your model agree?

No, it does not; the model says the latter is more difficult. The second note in each triple would receive a higher weight than anything in the first pattern. I decided to play this at 240 bpm, and the first one felt harder due to the speed that has to be maintained throughout. Meanwhile, the second one offers some points of relief, making it easier.

Looking at this from the point of view of theoretical rhythmic complexity, the second one should be more complex, since it has more varied features. So the model I created does correspond to that notion in this case, but there are also hints that complexity may not necessarily correspond to harder patterns in all cases.

All in all, I didn't expect this to be perfect, and I have an entire section in the doc saying that testing is required and that the model is likely subject to change. I shared the doc not claiming it as a valid solution, but as a theory. I would wait until actual testing of the model is performed before trying to seriously punch holes in it.
