
Allow types to be parameterized by integer (and bool) constant values. #884

Closed

Conversation

quantheory
Contributor

I was somewhat reluctant to push this now, because there are so many other things going on (and I personally have opened a few smaller PRs recently). However, I think that the basic design we discussed on the forum has reached a stable point.

This is also relevant to the motivation discussed for the macros-in-types proposal (#873). While that proposal does not directly compete with this one, it does suggest implementing some of the features that are the main thrust of this proposal through the macro system instead.

Rendered.

@quantheory quantheory mentioned this pull request Feb 18, 2015
@ghost

ghost commented Feb 18, 2015

Thanks for putting this together. I plan to read through it more carefully but I wanted to mention that some of the type-level stuff in Shoggoth is currently being redesigned and is not fully reimplemented but should be back to normal in a few days. In the meantime, you may want to change the link from the RFC to this revision.

@milibopp

To facilitate triage, #262 and #273 should probably be closed in favour of this. On the other hand, a meta-issue to track advancing this feature in stages might be appropriate. @quantheory already listed the stages as:

  • Add integer primitives (and bool) to deal with the immediate issues regarding interaction with [T; N].
  • Add some combination of tuples, arrays, &'static, and any other sized built-in types I've somehow forgotten. Tuple and array types will, of course, only implement Parameter if the contained types do. If we add it, &'static doesn't need this restriction.
  • Allow struct/enum types with some strict conditions (e.g. they implement Copy, Eq is derived, all fields implement Parameter, all fields are public).
  • With CTFE, it may be possible to loosen some conditions a bit.
  • Add unsized types and floating-point values very carefully or never.

@ghost

ghost commented Feb 18, 2015

@quantheory First, I want to say that I'm definitely on board with the idea that we need a way to be more precise about types, and indexing by numeric values and also other data is a feature I would very much like to have in Rust.

There are a couple of specific points I'll comment on. The first is using macros and type-level programming to emulate or "fake" this functionality versus the constant values proposal, since that comparison is mentioned in the RFC and the former is what I've been using and thinking about a lot recently. Then I'll conclude with some remarks on this issue from a slightly different perspective.

Like I mentioned in the other RFC thread, though, I don't consider type-level macros to be a direct solution to this issue; they're more about making what's already possible more convenient.

comments on comparison with type-level macros

However, some degree of macro and plugin support is required to provide efficient support for natural numbers with clean syntax, which does imply some additional burden.

If a library provides items parameterized by naturals, its users will find the library difficult or impossible to use without the same plugin(s) that that library used.

If the standard library ever did so, it would likely be simplest to integrate this capability into the compiler.

It's true that type-level numerics implemented as a library would require users to depend on plugins of that library in the rest of their code. I also agree that not having these facilities builtin would mean additional programming burden. However, I think this is an issue affecting any feature-as-a-library.

I do agree that in an ideal case, if type-level numerics were to become a thing, it would be best if basic functionality were provided by the compiler and standard libraries. This is what GHC does for type level naturals. Idris also treats type-level naturals (well, they aren't exactly type-level anymore) specially: I believe they still have a unary (Peano) representation externally but an optimized representation internally. Theoretically, Rust could do something similar without much effort.

The proposed macros provide capability well beyond the provision of type-level naturals. A rather vast array of types can be defined and used with simple syntax (though this requires implementation work, rather than being automatic, as a data kinds feature would be).

I don't view type-level macros as a replacement or substitute for something like singletons or promotion with data-kinds. I think that a comprehensive story for type-level programming in Rust will eventually have to encompass those ideas somehow. In fact, I think that it would be optimal to tackle data kinds and higher-kinded types at the same time, as part of a comprehensive kinding story.

However, it is worth pointing out that the macro system (with type-level macros) would already be enough to emulate much of what you would get with automatic promotion. It wouldn't be as ergonomic to use of course, but it's a low-commitment solution. In fact, most of the singletons library is derived automatically with template haskell, which is a similar idea. I think @jroesch even had such a macro working in Rust, maybe he can say more.
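
A minimal sketch of that idea (with illustrative names, not the actual macro @jroesch had working): a single macro_rules! declaration can generate both the "kind" trait and its promoted inhabitants.

macro_rules! promote_unit_types {
    ($kind:ident; $($ty:ident),*) => {
        pub trait $kind {}
        $(
            pub struct $ty;
            impl $kind for $ty {}
        )*
    };
}

// One line now yields the classifying trait plus its type-level values.
promote_unit_types!(Sign; Pos, Neg);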

What a macro solution for that doesn't address though is how to have efficient internal representations of certain data. This is a more difficult problem, but not one without potential solutions and literature on the matter. (There would probably be some overlap between such a mechanism and a hypothetical "newtype deriving" feature.)

Macros cannot easily introduce a new type or its value into scope, meaning that using a type-level natural N as a value requires a macro (e.g. val!(N)). Working with multiple incompatible type-level systems would require some care, if only to avoid name clashes.

I think what you are getting at here is that macros are not extensible. (At least, not as far as I know…). No doubt this is a challenge. It may not be as bad as it sounds though because one should be able to make the interpreters underlying the macros (which translate the expressions) modular with some additional effort. This is possible because traits are open and you can make the macro piggyback/defer to individual implementations in order to proceed.
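
To sketch what that modularity could look like (all names here are illustrative, not Shoggoth's actual API): if the macro merely expands to a trait projection, the operation itself stays open for downstream impls.

struct _0;
struct _1;

trait Not { type Output; }
impl Not for _0 { type Output = _1; }
impl Not for _1 { type Output = _0; }

// A hypothetical not!(B) macro would expand to <B as Not>::Output, so new
// type-level data only needs new trait impls, not changes to the macro.
type FlippedZero = <_0 as Not>::Output; // = _1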

Not all constant expressions can be used at the type level without an arbitrary number of plugin passes, since the plugin is needed to determine which types are present, which in turn can add more const values, which could in turn affect more types…

I don't quite understand this point. If you mean that a type-level macro could trigger computation, well, yes. But this is going to be the case for anything that isn't a canonical value, even for these constants using operations like +, isn't it?

If the concern is about non-termination, I don't even think it's possible to prevent that since there's no way to provide evidence that any plugin produces its complete output in finite time. There's also the issue about associated types…

Type-level natural numbers (or integers) are not themselves given a Rust type. They can in principle correspond to arbitrarily large numbers (like bignums).

I'm thinking what you mean here is that type-level representations of data don't have a unique kind? Crucially, at the term-level they do have types, which is both how they can be manipulated using associated types and how it's possible to bridge the term and type level using singletons, i.e., types with a single value.

struct _0;
struct _1;

let two: (_1, _0) = (_1, _0);

In other words, each representation of the value at the type level is a unique type; it's just that without kinds there is no way to classify the extent of such value representations.

However, it is possible to approximate kinds with traits:

trait Bit {}
impl Bit for _0 {}
impl Bit for _1 {}

In fact, this seems to be very similar to what you are doing with the Parameter trait if I understand correctly. Although in the example it looks like Parameter corresponds more to a sort classifying a kind, allowing an approximation of kind-polymorphism (Shoggoth had a similar thing at one point with A: Ty, M: Tm<A>).

In order for that to be effective, the compiler will need to know it is a "sealed" trait (i.e., has only a fixed number of known implementations), which unfortunately is not a concept that can currently be expressed in Rust. Maybe this is what you mean by special compiler support. (I'll come back to this point later.)
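
(For reference, the closest approximation expressible today is the private-supertrait pattern: it blocks outside impls, even though the compiler still can't exploit the closedness. A sketch with illustrative names:)

pub struct _0;
pub struct _1;

mod private {
    pub trait Sealed {}
    impl Sealed for super::_0 {}
    impl Sealed for super::_1 {}
}

// Downstream code can name and use Bit, but cannot implement it, since
// private::Sealed is unnameable outside this module.
pub trait Bit: private::Sealed {}
impl Bit for _0 {}
impl Bit for _1 {}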

comments on const value parameters

Const parameters can be used as values directly, with no val! macro or other special syntax.

I'm not sure if this is an apples to apples comparison though. You're still proposing adding special syntax to the type system, e.g., const and {...}, along with special interpretations of arithmetic operators. Having these builtin would provide a more natural syntax than is possible with macros, that much is certainly true, but it doesn't eliminate all of the syntactical overhead as long as Rust still makes some distinction between term-level and type-level things.

overall comments

Just to re-emphasize, I agree with the principle behind this RFC and want the kind of functionality described. I have some different thoughts on how to get there, but I actually think it's more a matter of perspective, which I'll try to explain.

I think one of the key issues here is what all of this should mean in terms of kinds. You do mention data-kinds and, briefly, higher-kinded types, but don't go into a lot of detail.

My thoughts are basically that in order to do this nicely with regard to future extensibility of the language, the constants should really have a kind, distinct from type and lifetime. Once you commit to that idea, it makes sense to have kinds for bool, nat, str and other types. It also makes the distinction between closed (basically canonical values in this case) and open terms (e.g., N + 1) seem a bit arbitrary. There wouldn't be a need for Parameter anymore with kind-polymorphism, and the const N: T syntax could be generalized to a kinding judgement, either with special syntax (maybe N:* bool) or even just reusing :.

At that point, I think the remaining distinction between providing the described RFC functionality as a special case of an embedded arithmetic language versus fitting into the more general type-level programming picture boils down to a matter of efficiency. Algebraic representations of numbers in unary fashion are just unusable, I think we all agree. Binary representations are efficient enough to use but perhaps not optimal.

However, I don't think we necessarily have to make a choice between efficient builtin representations and less efficient but easy-to-program-with external representations. It seems quite possible to provide builtin efficient representations for some kinds like bool, nat, and str, along with a convenient literal syntax that doesn't require macros, as part of the compiler and standard libraries. Additionally, the libraries could expose an inductive representation to program against. This would correspond to being able to write HLists like hlist![a, b, c] but still program with them using Nil and Cons, as sketched below. It should even be possible to support specifying efficient internal representations of operations otherwise implemented with type-level programs through a partial-evaluation and reflection based API.
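
For instance, a minimal sketch of that shape (the hlist! here is plain macro_rules!, standing in for the nicer builtin literal syntax):

struct Nil;
struct Cons<H, T>(H, T);

macro_rules! hlist {
    () => { Nil };
    ($head:expr $(, $tail:expr)*) => { Cons($head, hlist!($($tail),*)) };
}

fn main() {
    // Convenient literal form...
    let xs = hlist![1u8, "two", 3.0f64];
    // ...that still exposes the inductive structure for programming.
    let Cons(first, Cons(second, _)) = xs;
    assert_eq!(first, 1u8);
    assert_eq!(second, "two");
}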

So I think it's possible to reach a point where more or less the same functionality is available, along with mostly the same convenient interface, but without having to special-case arithmetic and bools or forgo more general abstraction facilities like type-level operations implemented using traits.

To be honest, I suspect it would require a similar amount of effort to implement this RFC directly as it would to implement a better kind story, along with sealed traits and promotion/singleton literals. (The proposed functionality essentially relies on these ideas implicitly, I think, just not as first-class features.) But with the latter, we'd have a lot more flexibility in the long run and less pressure to add a whole slew of specialized arithmetic operations and related functionality, because they could be defined by library authors.

Just my 0.02, and like I said, I don't think the conclusions are that different, maybe just how to get there.

I also want to say, I'm not suggesting delaying such a feature until a better story about kinds actually lands in Rust, especially since not having constant values for arrays is a real pain at the moment. It's just my opinion that these points should perhaps be considered more thoroughly before going too far down the rabbit hole.

@AndyShiue

+1 for kinds, or at least do it forward-compatibly ;_;

@glaebhoerl
Contributor

@freebroccolo (Going with GHC as the reference mainly because I'm familiar with it): In GHC without extensions there's the arrow kind -> and the base kind * (could be pronounced "Type"). Turning on ConstraintKinds adds another base kind Constraint, and DataKinds adds a new base kind for each declared datatype: Bool, Maybe, etc.

What I'm wondering about is if it would make sense to structure this slightly differently for Rust, by having a single base kind for compile-time constants, parameterized over the constant's type. So using GHC syntax our base kinds would be * (Type) and Const a, so e.g. Const Bool, Const Int32, Const (Maybe Int32), etc. (and later perhaps Trait/Constraint). (Leaving Rust syntax aside for now, but const Foo: T in parameter lists could perhaps be some kind of sugar for it.) I don't have any hard arguments for this or against it, but intuitively, it feels like it might be nice to keep all of the data kinds "under one roof" in this way, instead of "dumping all of them into the top-level" like GHC does. Others might perhaps have better insight into what practical ramifications this would have.

@quantheory
Contributor Author

@freebroccolo

I think that you discuss a lot of relevant points, so thanks for sharing your thoughts. We are already on the same page about the majority of this, so I'll confine my response to the points where I think that there may still be some confusion or real disagreement. Nonetheless, I'm afraid that this will be quite long.

I don't quite understand this point. If you mean that type-level macro could trigger computation, well, yes.

Hmm, maybe examples will help. I admit that the second one below is contrived (I made it up off the top of my head), and uses associated consts, which are not implemented in rustc yet. Still, it's plausible that the general pattern could be useful down the line.

// Example 1: This could be difficult.
const TROUBLE: usize = 32;
struct Foo<N: Nat> { /* ... */ }
fn something_dull<N: Nat>() -> Foo<Expr!(8 * N)> { /* do something */ }

// Example 2: This is worse.
trait SomeTrait {
    const N: usize;
}
fn use_some_nat<M: Nat>() { /* do something */ }
fn some_fun<T: SomeTrait>() {
    const DOUBLE_TROUBLE: usize = 2*(<T as SomeTrait>::N);
    use_some_nat::<Expr!(DOUBLE_TROUBLE + 64)>();
}

Even for the plugin implementing Expr! to deal with named constants in global scope, it would have to run late and get information from the compiler beyond syntax trees, since otherwise (a) it can't tell whether any given identifier refers to a const value or a Nat type, and (b) it doesn't know the actual value behind the named const. Nonetheless, example 1 is a perfectly ordinary thing to do, and for the special case of [T; N] arrays, already possible now.

Example 2 is worse in that now it really is impossible to handle unless the Expr! is re-evaluated every time some_fun is instantiated for a new type.

I'm thinking what you mean here is that type-level representations of data don't have a unique kind?

Yes. I should have talked more about coming up with a kind story, since it's apparently not clear that that's exactly what I had in mind. So to be quite explicit, the point of this proposal is in fact that bool, u8, i8, and so on really are promoted to kinds. I have had some difficulty in figuring out how to write this:

  1. Rust users have diverse backgrounds, and I never know whether it's more helpful to use jargon from type theory, or use more C++-like language.
  2. As you note, there's not much systematic discussion of kinds in Rust. Actually, the reference does have a "type kinds" section, but it discusses Send, Copy, and 'static (and Drop), rather than arity of type constructors or anything of that nature. I think that by the time we introduce higher-kinded types (assuming that we do, and for lifetimes we may have to), we will have to settle on some better/broader way of talking about kinds. But I sort of dodged the question for now (in particular, I'm thinking that higher-kinded types is the feature that will really require tackling it).

Although in the example it looks like Parameter corresponds more to a sort classifying a kind, allowing an approximation of kind-polymorphism.

Just as a very brief note, perhaps we should start thinking about variadic types as representing another form of kind-polymorphism. I hadn't given that much thought yet.

I guess we could think of Parameter as the sort of all kinds that have been promoted from types. However, in a language that doesn't have much expressiveness regarding kinds, there's really no way to talk about sorts, so this might be interesting but not useful. Hence the reverse approach, defining a trait implemented by all types that can be promoted to kinds.

Although it's a separate thing, my mental model here is that there is a const kind, which would be the kind of all types that can be demoted back to values. So rather than having a unary kind constructor as @glaebhoerl suggests, I was thinking that const is the kind of which all kinds promoted from types are sub-kinds. You can then talk about kind-level covariance, in that kind constructors being promoted from type constructors may produce sub-kinds of const if applied to sub-kinds of const. This is relevant if you say that when a primitive type can be promoted to a kind, so can types that use it (e.g. struct Foo(i32); could be promoted).

In order for that to be effective, the compiler will need to know it is a "sealed" trait (e.g., has only a fixed number of known implementations), which unfortunately is not a concept that can currently be expressed in Rust.

Yes. I was thinking of Parameter as initially having a fixed number of impls. If we expand to a fuller dependent type system, it would no longer be sealed; my mental picture is that it would come to resemble Sized, in that the compiler would automatically consider all eligible types to implement it. That would be more-or-less the equivalent of Haskell's behavior with the DataKinds extension, I think.

There wouldn't be a need for Parameter anymore with kind-polymorphism, and the const N: T syntax could be generalized to a kinding judgement, either with special syntax (maybe N:* bool) or even just reusing :.

I'm certainly open to syntax changes, but as I hinted at above, I view const as introducing a type that can be automatically demoted back to a constant value (you can even view the existing uses of const as meaning the same thing if you squint). This means that there's a concrete distinction between what the compiler can do with these types versus types that are not of const kind, which is also the distinction that Parameter makes, in a different way.

However, I don't think we necessarily have to make a choice between efficient builtin representations and less efficient but easy to program with external representations.

I think our disconnect is that I really haven't placed a lot of value on the latter. I find inductive representations to be entertaining but not especially "easy to program with", and so I have difficulty finding the motivation to give them a central role here unless it's clear what additional capability they provide.

We already have constant expressions, a plan to implement constants that are outputs of traits, and arrays that accept usize constants as an integer parameter. To me, the fact that these expressions typically use integers of a primitive type is a significant motivation to promote the primitive integer types to kinds rather than to focus on a nat kind. (That doesn't mean that we couldn't also have an efficient nat, but providing the sort of API you're suggesting to handle that is really adding an additional feature that's not required for the primitive types.)
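
(Concretely, the existing special case referred to here already compiles today:)

// [T; N] already accepts an arbitrary usize constant expression.
const K: usize = 4;

fn first(xs: [u32; K * 2]) -> u32 {
    xs[0]
}

fn main() {
    assert_eq!(first([7; 8]), 7);
}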

This is also how I would respond to the comment about implementing arithmetic as a "special case". If we started with this proposal and later expanded to a fuller dependent type system (assuming CTFE as a part of that), there would be nothing special about type-level integers except the specialness that's there purely as a consequence of the fact that value-level integers are already special themselves.

To be honest, I suspect it would require a similar amount of effort to implement this RFC directly as it would to implement a better kind story, along with sealed traits and promotion/singleton literals.

I think that doing the latter, in a way that integrates well with value-level consts and other existing Rust features, requires tackling a superset of the features proposed here. So I don't agree with the comparison.

Just my 0.02, and like I said, I don't think the conclusions are that different, maybe just how to get there.

Yes, I think/hope so. It may be good for me to think about edits to the RFC that make it more clear how I picture this fitting into a kind system.

@ghost

ghost commented Feb 19, 2015

@quantheory Thanks for responding. I wanted to comment on a few of the points you raise and try to get to the others when I have more time:

However, I don't think we necessarily have to make a choice between efficient builtin representations and less efficient but easy to program with external representations.

I think our disconnect is that I really haven't placed a lot of value on the latter. I find inductive representations to be entertaining but not especially "easy to program with", and so I have difficulty finding the motivation to give them a central role here unless it's clear what additional capability they provide.

From my perspective, the inductive representations are crucial for being able to define new type-level functions/operations using traits and associated types. Unless I missed something in the RFC, there is no way to introduce new functions/operations in this proposed arithmetic fragment. In order to do something like that, you'd need at least pattern matching and recursion, or a higher-order induction operator and lambdas. If introducing new functionality into the arithmetic fragment means having to make more things builtin, I don't see how this is going to be useful in general for type-level programming.
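
To illustrate, here is roughly what a user-defined type-level operation looks like with the inductive encoding today (Peano naturals; the names are mine): impl selection plays the role of pattern matching, and a recursive bound plays the role of recursion.

use std::marker::PhantomData;

struct Z;
struct S<N>(PhantomData<N>);

trait Add<Rhs> { type Output; }
// Base case: 0 + m = m.
impl<Rhs> Add<Rhs> for Z { type Output = Rhs; }
// Recursive case: (1 + n) + m = 1 + (n + m).
impl<N, Rhs> Add<Rhs> for S<N>
    where N: Add<Rhs>
{
    type Output = S<<N as Add<Rhs>>::Output>;
}

type Two = <S<Z> as Add<S<Z>>>::Output; // = S<S<Z>>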

As a meta-comment, I don't think you can even really have "dependently typed programming" without providing convenient inductive representations of data just due to how type information has to be propagated throughout structures and computation and how things like proof witnesses are constructed. You can provide non-inductive things with some opaque internal structure along with various builtin higher-order induction operations, but these are far less "easy to program with".

For example, consider the case where you need to provide evidence that an integer satisfies some property. Maybe you need an operation that only works for some T<N> when N < K. How do you provide this information? Maybe you can provide a built-in decidable relation for <, but what about other specialized properties the developer might need? Even if you wanted to, you couldn't just bolt on a constraint solver either, because not all properties are decidable. You need to have the ability to construct proofs of these properties inductively, at least in principle.
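
A sketch of exactly this, in the same inductive style (again with illustrative names, redefining Z and S for self-containment):

use std::marker::PhantomData;

struct Z;
struct S<N>(PhantomData<N>);

// Inductively constructed evidence that Self < Rhs.
trait LessThan<Rhs> {}
// Base case: 0 < m + 1.
impl<M> LessThan<S<M>> for Z {}
// Inductive step: if n < m, then n + 1 < m + 1.
impl<N, M> LessThan<S<M>> for S<N> where N: LessThan<M> {}

// An operation that demands the evidence as a bound.
fn bounded_op<N, K>() where N: LessThan<K> {}

fn main() {
    bounded_op::<S<Z>, S<S<Z>>>();    // 1 < 2: compiles
    // bounded_op::<S<S<Z>>, S<Z>>(); // 2 < 1: no impl, rejected
}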

This is also how I would respond to the comment about implementing arithmetic as a "special case". If we started with this proposal and later expanded to a fuller dependent type system (assuming CTFE as a part of that), there would be nothing special about type-level integers except the specialness that's there purely as a consequence of the fact that value-level integers are already special themselves.

What concerns me is that I have a hard time seeing a clear path from this RFC to a future Rust with a good type-level programming story across the board. It seems you're suggesting several other non-trivial RFCs in order to get to a point that might be comparable to what can already be done with traits/associated-types/macros.

I'll be the first to admit that type-level programming with traits/associated-types/macros is quite clunky and I'd definitely prefer a better way. But given the flexibility and generality it already offers, I'm hesitant to welcome an alternative that won't have comparable flexibility unless several additional unspecified changes to the language also land at some point.

For example, I don't see how, from this proposal, even with some of the future RFCs you've suggested, I could implement most of the existing or planned features in Shoggoth. Maybe that's okay; maybe this RFC and the follow-ups aren't really intended to address such things. But if not, I think there ought to be a stronger case for why constant value parameterization deserves such extensive changes and dedicated machinery when it doesn't address so many use cases.

From my perspective, the most convincing argument for following this RFC seems to be convenience in certain use cases. As far as arrays are concerned, constants seem like a reasonable idea and are definitely worth pursuing in the short term. But as far as following this approach and generalizing with several follow up RFCs, I'm not convinced yet on the technical side, especially concerning such open ended and unspecified things like CTFE.

For convenience in the general type-level programming case, I think we can achieve this sooner, with less effort, and less future complications by building on what we already have in smaller steps (sealed traits, data-kinds/promotion/singletons, efficient type representations) rather than by introducing entirely new mechanisms (constant values vs traits/associated-types, CTFE vs macros) which require such extensive changes.

To be honest, I suspect it would require a similar amount of effort to implement this RFC directly as it would to implement a better kind story, along with sealed traits and promotion/singleton literals.

I think that doing the latter, in a way that integrates well with value-level consts and other existing Rust features, requires tackling a superset of the features proposed here. So I don't agree with the comparison.

The reason I say this is that in order to implement the proposed functionality in the RFC, you will already need some internal notion like special kinds for the different types of constants, you will need to treat Parameter like a sealed trait, etc. The rest follows similarly. Essentially, you'd be making these distinctions already in the compiler. The difference is that all this would be specialized handling of constant values instead of being exposed as a feature.

@pcwalton
Contributor

Wow. This is, like, the model of a good RFC. From a quick glance it seems to mirror a lot of my thoughts about generics and constants, but it's a lot more in-depth and thought-out. I need to spend time and digest this more thoroughly :)

@quantheory
Contributor Author

@freebroccolo

  1. I should retract what I said about inductive types. Once I stopped to think about it, I can think of quite a few things that they make possible.

  2. Parameter was one of the last things added to this RFC, and one of the pieces that I've thus had the least time to think about. While I don't think that it represents such a big implementation burden, I could easily be convinced to remove it.

  3. My experience is that working with arbitrary usize constant expressions, with the same flexibility arrays have, has been the most frequently mentioned use case by a wide margin. Working generically with fixed-size chunks of memory is really a key systems programming feature for a lot of people who don't otherwise have a very high interest in generic types or type-level programming. So my perspective has not been that we should get a dependent type system in a very general/quick/cheap way, and see how much we can do for the special case of generic container sizes along the way. Rather, I've been treating it as a constraint that we have to have very flexible and intuitive functionality for integers (or at least usize) early on, as the most important case for the community at large. Given that constraint, I want to do it in a way that can plausibly be built out to get a real dependent type system, and from that perspective I think that this RFC is actually pretty conservative, in that it doesn't go very far beyond that functionality.

    That does mean that the promoted integer values won't be a drop-in replacement for type-level naturals, not for the foreseeable future if ever. However, though I know that that's disappointing, it doesn't make the situation worse for type-level naturals either, and it doesn't bar pursuing the macro/plugin approach further.

  4. I think that there is quite a lot that you can do on top of this. I know that you're somewhat skeptical about CTFE, but I think that in combination with data kinds it could be quite powerful, and provides the possibility of defining custom operators that can be promoted to the type level along with the data they act on (e.g. type-level +). (For the record, I'm also skeptical that providing an API for plugins to control the compiler's representation of types involves less complexity or implementation work than CTFE.)

    In the much shorter term, if this RFC gets much support, I'm planning a follow-up RFC that adds constraints on const parameters. This includes boolean conditions that act as assertions (so you do get > as a special case, but also anything else you can write in a constant expression). It also includes using match patterns as a way of providing constraints that are suitable for coherence checking, since I think that there are some imperfect but still very useful analogies between match patterns and trait impl constraints.

    One last thing could be impl for promoted values, something like impl SomeTrait for const B: bool, and working with constant values as types (rather than constants) in a broader context. I admit that I have thought much less about this, but it's something to consider.

@pcwalton

Thanks! I know it's a busy time to open a request like this, and it even might end up being postponed for that reason alone, but I'm trying to at least be a good citizen regarding the form of the RFC.

@Gankra
Contributor

Gankra commented Feb 21, 2015

Wow, this is fantastic! I'm glad you settled on a conservative primitives-only design. That's the functionality we really need out of this to talk efficiently about stack-allocated structures.

I'm still a bit nervous about arbitrary expressions ({n + 2}), but the plan you've laid out for them is pretty conservative and reasonable. I could live with it. Especially since it seems like we could implement literals-only to start, and upgrade our way into expressions, possibly pivoting if we run into trouble.

I'm also a fan of the alternative <A, B=default; C: T, D: U=default> syntax, since it seems cleaner (and I like the metaphor to [T; N]). It perhaps stinks for something that wants to "naturally" group consts and non-consts in some special way (e.g. a hypothetical multi-dimensional storage type might want to group the args to look like [T; n, U; m, V; r] rather than [T, U, V; n, m, r]), but that's a kind of weak argument in my opinion.

@ftxqxd
Contributor

ftxqxd commented Feb 21, 2015

Nice RFC! I like the const syntax because it could easily fit in well with a potential full kind system, and matches the syntax for associated constants. I am a little worried that that syntax is too similar to C-like declarations (e.g., int x rather than let x: int), but I don’t think there’s a way around that (even if we chose a syntax like <T:: const: u32>, we’d need to do the same with associated items for consistency, and thus all other items as well, and we’d have to declare types like let x:: type<type> = struct<T> { x: T }; or something, which is awful).

I’m not a huge fan of the {expr} syntax, though—something like (expr) would be much nicer, but unfortunately that clashes with the existing (but rarely-used) (type) syntax. I’ve got a draft RFC that proposes removing the (type) syntax in favour of <type> (which is actually already allowed as a by-product of UFCS (but is not yet implemented)) for this reason (and also because <> are just more common in types anyway).

@Diggsey
Contributor

Diggsey commented Feb 22, 2015

The ';' syntax for distinguishing non-type parameters is potentially problematic because it makes it impossible for type bounds to depend on non-type parameters:
fn make_tree<const N: u32, T: Tree<N>>()
=>
fn make_tree<T: Tree<N>; N: u32>()
That is, unless you allow use of generic parameters before they are declared.

@gnzlbg
Contributor

gnzlbg commented Feb 23, 2015

It's not clear whether const parameters could be added to variadic parameter lists, or if it will be necessary to limit variadic behavior to types.

I see no reason why this shouldn't work.

@quantheory
Contributor Author

@P1start I don't have a particularly strong feeling about {} vs (). We just need to somehow avoid ambiguities from symbols like <, >, &, and [].

@Diggsey We already allow use of parameters before they are declared for types (as long as the declaration is later in the same list), so I don't think that this is an issue. Even if it was, you could always just move the bound to a where clause.
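
(A quick check that the forward reference compiles today, with a stand-in Tree trait:)

trait Tree<N> {}

// T's bound refers to N, which is declared later in the same list.
fn make_tree<T: Tree<N>, N>() { /* ... */ }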

@Gankra
Contributor

Gankra commented Feb 23, 2015

@Diggsey we already allow both:

Foo<T: Bar<U>, U> and Foo<U, T: Bar<U>>. In general you can have quite tangled type dependencies with complete disregard for order in Rust today. The only place I'm aware of where order matters is default types.

  • Foo<T = Bar<U>, U = usize> doesn't work
  • Foo<U = usize, T = Bar<U>> does
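
(A runnable check of that rule, with a stand-in Bar:)

struct Bar<U>(U);

// A default may refer backwards to an already-declared parameter...
struct FooGood<U = usize, T = Bar<U>>(U, T);
// ...but not forwards:
// struct FooBad<T = Bar<U>, U = usize>(T, U); // error: forward-declared U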

However, this does raise some concerns for me about how exactly this proposal interacts with defaults. In particular, the precise syntax is less of a bikeshed than I thought.

  • Foo<T, U=Bar; N: u8, M: u8 = 5> implies
    • Foo<T; 5>
    • Foo<T, U; 5>
    • Foo<T; 5, 5>
    • Foo<T, U; 5, 5>
  • Foo<T, const N: u8, U=Bar, const M: u8 = 5> implies
    • Foo<T, 5>
    • Foo<T, 5, U>
    • Foo<T, 5, U, 5>
    • ... Foo<T, 5, 5>? or just Foo<T, 5, _, 5>?

The last case without the _ is technically possible for primitives, since 5 cannot be mistaken for a type.

Dependency resolution for the ; syntax with its two stacks of defaults is also less clear cut. The "no back references" rule of the current stack system could be expanded to "no cycles", where the basic stack-order of <A = T, B = U> implies an "edge" A -> B. So <T, U=Bar<M>; N: u8, M: u8=5> would be technically valid. Or we could simply mandate that the two stacks of defaults can't refer to each other at all. This would be backwards-compatible to upgrade into the "no cycles" rule.

Some motivating usage examples might help clarify what default rules we inherit. Personally, I want to be able to do:

struct Stack<T; n: usize> {
  data: [T; n],
  len: usize,
}

enum HybridStack<T; n: usize> {
  Heap(Vec<T>), // only if len exceeds n
  Stack(Stack<T; n>),
}

// eventually
enum Hybrid<T, Allocator=GlobalHeap; n: usize> {
  Heap(Vec<T, Allocator>),
  Stack(Stack<T; n>),
}
struct BTree<K, V; B: usize = 6> {
  root: Node<K, V; B>,
  len: usize,
}

// alternatively these are all bulk-allocated like they are today
// not clear what the best indirection pattern is
struct Node<K, V; B: usize> {
  keys: [K; 2 * B - 1],
  vals: [V; 2 * B - 1],
  edges: [Box<Node<K, V; B>>; 2 * B],
  len: usize,
  is_leaf: bool,
}

// eventually
struct BTree<K, V, Allocator=GlobalHeap; B: usize = 6> {
  root: Node<K, V, Allocator; B>,
  len: usize,
}

So with BTree I have a clear motivating example of wanting defaults to be independently specified (allocator and B are inherently decoupled), but both syntaxes allow that. I don't have any use case for allowing "cross references" with the ; syntax.

@quantheory
Contributor Author

@gankro

My intention was that only Foo<T, 5, _, 5> would be accepted in that example. The reason is that otherwise you could have an item defined as Bar<T=u32, const M: usize, const N: usize>, then used as Bar<3, 5>. But then if you replace the first number with _, you're now defining M to be 5, rather than N. It seems more error prone.

@gnzlbg
Contributor

gnzlbg commented Feb 26, 2015

Is there an implementation of this behind a feature gate?
I would really like to use this already.

@quantheory
Contributor Author

@gnzlbg I've wanted to take a crack at it, but I haven't found the time yet. I was at a conference most of last week, and will be on vacation most of next week, so unless someone else wants to give it a try, it'll be a while.

@DanielKeep

Just a thought: the {...} exception for expressions is kinda ugly. What about this instead: define const { ... } as a general syntax form that is guaranteed to either evaluate to a compile-time constant value, or fail to compile entirely. So:

// Rather than:
impl Gnarl<i32, {4+4}, u64> for Darl { /* ... */ }
// Instead:
impl Gnarl<i32, const {4+4}, u64> for Darl { /* ... */ }

This is more verbose, but it has the advantage that the same syntax can then be used elsewhere with the same meaning. This could be added in a later RFC to allow programmers to ensure that particular expressions get constant-folded, and later extended to trigger CTFE.

// Require CTFE evaluation:
let x = const { sin(1.3) };

I bring this up because one thing that's always a little dicey is storing an expression in a const solely to guarantee that it actually gets folded at compile time.
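
(That is, the workaround today is to hoist the expression into a const item, whose initializer must be compile-time evaluable. A trivial sketch:)

// Hoisting into a `const` item guarantees compile-time evaluation...
const EIGHT: u64 = 4 + 4;

fn main() {
    // ...where `let x = const { 4 + 4 };` would express the same intent inline.
    let x = EIGHT;
}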

@killercup
Member

I finally got around to reading this and it was worth it: This is one of the most exciting RFCs I've read so far. I would love to have this functionality in Rust. It always bothered me a bit that arrays get to have length parameters as part of their type but library types could not do this.

I generally like the proposed syntax using const (instead of ;) as well as the const expressions @DanielKeep suggested. It may be more characters to type but it seems more explicit and reads better.

@brson
Contributor

brson commented Mar 12, 2015

While this is a feature Rust will likely get at some point, this is a large feature and I'm inclined not to do it now.

@brson brson self-assigned this Mar 12, 2015
@milibopp

@brson, would you care to elaborate on what motivates your inclination? Maybe such insights could inform improvements to the plans for this feature outlined here.

@quantheory
Contributor Author

@brson
I am willing to at least take a crack at implementing this on a branch, but I'm tied up trying to deal with associated consts right now. (I should note that many of the supposedly difficult edge cases in this proposal really do already have to be resolved for interactions between associated consts and array sizes, which is why I'm trying to do that first.)

I will most likely revise this RFC (and even more so, #865) to account for that experience, and to remove the Parameter trait, since it opens up questions about kind polymorphism that we aren't equipped to deal with right now. However, I would prefer this to remain open until then for discussion of syntax and edge cases, if that's OK. I think that if we're talking about a couple of weeks (as opposed to 6 months or a year) it should be alright to leave this open rather than postponing it? There's not a hard rule here.

@brson
Contributor

brson commented Apr 6, 2015

@aepsil0n My main reasoning is that this is a deep type system change, and none of the core team is prepared to tackle this problem yet. We're in a mode where we want to gain experience with the language we've built and that has gone through such drastic churn recently.

Postponing this issue. For further planning and discussion see #1038

@brson brson closed this Apr 6, 2015
@botev

botev commented Jun 18, 2015

I hope you guys are at least planning to come back to this issue (I hope soon). There are many cases where you need a copyable struct, which cannot be done with Vec, and yet code reuse forces you to parameterize over the size of some field. I'm currently writing a math library, and instead of resolving sizes at compile time, I have to check them dynamically and panic! when the sizes don't match.
Just want to express support for solving this issue.

@ticki
Contributor

ticki commented Dec 7, 2015

See also #1038.
