Support Iterators #1895

Closed
41 tasks done
philberty opened this issue Feb 21, 2023 · 2 comments · Fixed by #2604

philberty commented Feb 21, 2023

This is a parent issue to track progress on getting for-loops working, which actually use iterators under the hood.
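As a rough sketch (not the compiler's actual lowering), a for loop is sugar for calling IntoIterator::into_iter on the iterable and then repeatedly calling Iterator::next, using the same traits and types defined in the goal test case below:

  // `for x in (Range { start: 0, end: 3 }) { /* body */ }` desugars roughly to:
  let mut iter = IntoIterator::into_iter(Range { start: 0, end: 3 });
  loop {
      match iter.next() {
          Option::Some(x) => {
              // loop body runs here with `x` bound to the next element
          }
          Option::None => break,
      }
  }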

Task List

Goal-test case:

#![feature(intrinsics)]

pub use option::Option::{self, None, Some};
pub use result::Result::{self, Err, Ok};

mod option {
    pub enum Option<T> {
        None,
        Some(T),
    }
}

mod result {
    pub enum Result<T, E> {
        Ok(T),
        Err(E),
    }
}

#[lang = "sized"]
pub trait Sized {}

#[lang = "clone"]
pub trait Clone: Sized {
    fn clone(&self) -> Self;

    fn clone_from(&mut self, source: &Self) {
        *self = source.clone()
    }
}

mod impls {
    use super::Clone;

    macro_rules! impl_clone {
        ($($t:ty)*) => {
            $(
                impl Clone for $t {
                    fn clone(&self) -> Self {
                        *self
                    }
                }
            )*
        }
    }

    impl_clone! {
        usize u8 u16 u32 u64 // u128
        isize i8 i16 i32 i64 // i128
        f32 f64
        bool char
    }
}

#[lang = "copy"]
pub trait Copy: Clone {
    // Empty.
}

mod copy_impls {
    use super::Copy;

    macro_rules! impl_copy {
        ($($t:ty)*) => {
            $(
                impl Copy for $t {}
            )*
        }
    }

    impl_copy! {
        usize u8 u16 u32 u64 // u128
        isize i8 i16 i32 i64 // i128
        f32 f64
        bool char
    }
}

mod intrinsics {
    extern "rust-intrinsic" {
        pub fn add_with_overflow<T>(x: T, y: T) -> (T, bool);
        pub fn wrapping_add<T>(a: T, b: T) -> T;
        pub fn wrapping_sub<T>(a: T, b: T) -> T;
        pub fn rotate_left<T>(a: T, b: T) -> T;
        pub fn rotate_right<T>(a: T, b: T) -> T;
        pub fn offset<T>(ptr: *const T, count: isize) -> *const T;
        pub fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
        pub fn move_val_init<T>(dst: *mut T, src: T);
        pub fn uninit<T>() -> T;
    }
}

mod ptr {
    #[lang = "const_ptr"]
    impl<T> *const T {
        pub unsafe fn offset(self, count: isize) -> *const T {
            intrinsics::offset(self, count)
        }
    }

    #[lang = "mut_ptr"]
    impl<T> *mut T {
        pub unsafe fn offset(self, count: isize) -> *mut T {
            intrinsics::offset(self, count) as *mut T
        }
    }

    pub unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
        let x = x as *mut T;
        let y = y as *mut T;
        let len = mem::size_of::<T>() * count;
        swap_nonoverlapping_bytes(x, y, len)
    }

    pub(crate) unsafe fn swap_nonoverlapping_one<T>(x: *mut T, y: *mut T) {
        // For types smaller than the block optimization below,
        // just swap directly to avoid pessimizing codegen.
        if mem::size_of::<T>() < 32 {
            let z = read(x);
            intrinsics::copy_nonoverlapping(y, x, 1);
            write(y, z);
        } else {
            swap_nonoverlapping(x, y, 1);
        }
    }

    pub unsafe fn write<T>(dst: *mut T, src: T) {
        intrinsics::move_val_init(&mut *dst, src)
    }

    pub unsafe fn read<T>(src: *const T) -> T {
        let mut tmp: T = mem::uninitialized();
        intrinsics::copy_nonoverlapping(src, &mut tmp, 1);
        tmp
    }

    unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
        struct Block(u64, u64, u64, u64);
        struct UnalignedBlock(u64, u64, u64, u64);

        let block_size = mem::size_of::<Block>();

        // Loop through x & y, copying them `Block` at a time
        // The optimizer should unroll the loop fully for most types
        // N.B. We can't use a for loop as the `range` impl calls `mem::swap` recursively
        let mut i = 0;
        while i + block_size <= len {
            // Create some uninitialized memory as scratch space
            // Declaring `t` here avoids aligning the stack when this loop is unused
            let mut t: Block = mem::uninitialized();
            let t = &mut t as *mut _ as *mut u8;
            let x = x.offset(i as isize);
            let y = y.offset(i as isize);

            // Swap a block of bytes of x & y, using t as a temporary buffer
            // This should be optimized into efficient SIMD operations where available
            intrinsics::copy_nonoverlapping(x, t, block_size);
            intrinsics::copy_nonoverlapping(y, x, block_size);
            intrinsics::copy_nonoverlapping(t, y, block_size);
            i += block_size;
        }

        if i < len {
            // Swap any remaining bytes
            let mut t: UnalignedBlock = mem::uninitialized();
            let rem = len - i;

            let t = &mut t as *mut _ as *mut u8;
            let x = x.offset(i as isize);
            let y = y.offset(i as isize);

            intrinsics::copy_nonoverlapping(x, t, rem);
            intrinsics::copy_nonoverlapping(y, x, rem);
            intrinsics::copy_nonoverlapping(t, y, rem);
        }
    }
}

mod mem {
    extern "rust-intrinsic" {
        pub fn transmute<T, U>(_: T) -> U;
        pub fn size_of<T>() -> usize;
    }

    pub fn swap<T>(x: &mut T, y: &mut T) {
        unsafe {
            ptr::swap_nonoverlapping_one(x, y);
        }
    }

    pub fn replace<T>(dest: &mut T, mut src: T) -> T {
        swap(dest, &mut src);
        src
    }

    pub unsafe fn uninitialized<T>() -> T {
        intrinsics::uninit()
    }
}

macro_rules! impl_uint {
    ($($ty:ident = $lang:literal),*) => {
        $(
            impl $ty {
                pub fn wrapping_add(self, rhs: Self) -> Self {
                    unsafe {
                        intrinsics::wrapping_add(self, rhs)
                    }
                }

                pub fn wrapping_sub(self, rhs: Self) -> Self {
                    unsafe {
                        intrinsics::wrapping_sub(self, rhs)
                    }
                }

                pub fn rotate_left(self, n: u32) -> Self {
                    unsafe {
                        intrinsics::rotate_left(self, n as Self)
                    }
                }

                pub fn rotate_right(self, n: u32) -> Self {
                    unsafe {
                        intrinsics::rotate_right(self, n as Self)
                    }
                }

                pub fn to_le(self) -> Self {
                    #[cfg(target_endian = "little")]
                    {
                        self
                    }
                }

                pub const fn from_le_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
                    Self::from_le(Self::from_ne_bytes(bytes))
                }

                pub const fn from_le(x: Self) -> Self {
                    #[cfg(target_endian = "little")]
                    {
                        x
                    }
                }

                pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
                    unsafe { mem::transmute(bytes) }
                }

                pub fn checked_add(self, rhs: Self) -> Option<Self> {
                    let (a, b) = self.overflowing_add(rhs);
                    if b {
                        Option::None
                    } else {
                        Option::Some(a)
                    }
                }

                pub fn overflowing_add(self, rhs: Self) -> (Self, bool) {
                    let (a, b) = unsafe { intrinsics::add_with_overflow(self as i32, rhs as i32) };
                    (a as Self, b)
                }
            }
        )*
    }
}

impl_uint!(
    u8 = "u8",
    u16 = "u16",
    u32 = "u32",
    u64 = "u64",
    usize = "usize"
);

#[lang = "add"]
pub trait Add<RHS = Self> {
    type Output;

    fn add(self, rhs: RHS) -> Self::Output;
}
macro_rules! add_impl {
    ($($t:ty)*) => ($(
        impl Add for $t {
            type Output = $t;

            fn add(self, other: $t) -> $t { self + other }
        }
    )*)
}

add_impl! { usize u8 u16 u32 u64  /*isize i8 i16 i32 i64*/  f32 f64 }

#[lang = "sub"]
pub trait Sub<RHS = Self> {
    type Output;

    fn sub(self, rhs: RHS) -> Self::Output;
}
macro_rules! sub_impl {
    ($($t:ty)*) => ($(
        impl Sub for $t {
            type Output = $t;

            fn sub(self, other: $t) -> $t { self - other }
        }
    )*)
}

sub_impl! { usize u8 u16 u32 u64  /*isize i8 i16 i32 i64*/  f32 f64 }

#[lang = "Range"]
pub struct Range<Idx> {
    pub start: Idx,
    pub end: Idx,
}

pub trait TryFrom<T>: Sized {
    /// The type returned in the event of a conversion error.
    type Error;

    /// Performs the conversion.
    fn try_from(value: T) -> Result<Self, Self::Error>;
}

pub trait From<T>: Sized {
    fn from(_: T) -> Self;
}

impl<T> From<T> for T {
    fn from(t: T) -> T {
        t
    }
}

impl<T, U> TryFrom<U> for T
where
    T: From<U>,
{
    type Error = !;

    fn try_from(value: U) -> Result<Self, Self::Error> {
        Ok(T::from(value))
    }
}

trait Step {
    /// Returns the number of steps between two step objects. The count is
    /// inclusive of `start` and exclusive of `end`.
    ///
    /// Returns `None` if it is not possible to calculate `steps_between`
    /// without overflow.
    fn steps_between(start: &Self, end: &Self) -> Option<usize>;

    /// Replaces this step with `1`, returning itself
    fn replace_one(&mut self) -> Self;

    /// Replaces this step with `0`, returning itself
    fn replace_zero(&mut self) -> Self;

    /// Adds one to this step, returning the result
    fn add_one(&self) -> Self;

    /// Subtracts one to this step, returning the result
    fn sub_one(&self) -> Self;

    /// Add an usize, returning None on overflow
    fn add_usize(&self, n: usize) -> Option<Self>;
}

// These are still macro-generated because the integer literals resolve to different types.
macro_rules! step_identical_methods {
    () => {
        #[inline]
        fn replace_one(&mut self) -> Self {
            mem::replace(self, 1)
        }

        #[inline]
        fn replace_zero(&mut self) -> Self {
            mem::replace(self, 0)
        }

        #[inline]
        fn add_one(&self) -> Self {
            //Add::add(*self, 1)
            *self
        }

        #[inline]
        fn sub_one(&self) -> Self {
            // Sub::sub(*self, 1)
            *self
        }
    };
}

macro_rules! step_impl_unsigned {
    ($($t:ty)*) => ($(
        impl Step for $t {
            fn steps_between(start: &$t, end: &$t) -> Option<usize> {
                if *start < *end {
                    // Note: We assume $t <= usize here
                    Option::Some((*end - *start) as usize)
                } else {
                    Option::Some(0)
                }
            }

            fn add_usize(&self, n: usize) -> Option<Self> {
                match <$t>::try_from(n) {
                    Result::Ok(n_as_t) => self.checked_add(n_as_t),
                    Result::Err(_) => Option::None,
                }
            }

            step_identical_methods!();
        }
    )*)
}
macro_rules! step_impl_signed {
    ($( [$t:ty : $unsigned:ty] )*) => ($(
        impl Step for $t {
            #[inline]
            #[allow(trivial_numeric_casts)]
            fn steps_between(start: &$t, end: &$t) -> Option<usize> {
                if *start < *end {
                    // Note: We assume $t <= isize here
                    // Use .wrapping_sub and cast to usize to compute the
                    // difference that may not fit inside the range of isize.
                    Option::Some((*end as isize).wrapping_sub(*start as isize) as usize)
                } else {
                    Option::Some(0)
                }
            }

            #[inline]
            #[allow(unreachable_patterns)]
            fn add_usize(&self, n: usize) -> Option<Self> {
                match <$unsigned>::try_from(n) {
                    Result::Ok(n_as_unsigned) => {
                        // Wrapping in unsigned space handles cases like
                        // `-120_i8.add_usize(200) == Option::Some(80_i8)`,
                        // even though 200_usize is out of range for i8.
                        let wrapped = (*self as $unsigned).wrapping_add(n_as_unsigned) as $t;
                        if wrapped >= *self {
                            Option::Some(wrapped)
                        } else {
                            Option::None  // Addition overflowed
                        }
                    }
                    Result::Err(_) => Option::None,
                }
            }

            step_identical_methods!();
        }
    )*)
}

macro_rules! step_impl_no_between {
    ($($t:ty)*) => ($(
        impl Step for $t {
            #[inline]
            fn steps_between(_start: &Self, _end: &Self) -> Option<usize> {
                Option::None
            }

            #[inline]
            fn add_usize(&self, n: usize) -> Option<Self> {
                self.checked_add(n as $t)
            }

            step_identical_methods!();
        }
    )*)
}

step_impl_unsigned!(usize u8 u16 u32);
// step_impl_signed!([isize: usize][i8: u8][i16: u16][i32: u32]);
#[cfg(target_pointer_width = "64")]
step_impl_unsigned!(u64);
#[cfg(target_pointer_width = "64")]
// step_impl_signed!([i64: u64]);
// If the target pointer width is not 64-bits, we
// assume here that it is less than 64-bits.
#[cfg(not(target_pointer_width = "64"))]
step_impl_no_between!(u64 i64);
// step_impl_no_between!(u128 i128);

pub trait Iterator {
    type Item;

    fn next(&mut self) -> Option<Self::Item>;
}

impl<A: Step> Iterator for Range<A> {
    type Item = A;

    fn next(&mut self) -> Option<A> {
        if self.start < self.end {
            // We check for overflow here, even though it can't actually
            // happen. Adding this check does however help llvm vectorize loops
            // for some ranges that don't get vectorized otherwise,
            // and this won't actually result in an extra check in an optimized build.
            match self.start.add_usize(1) {
                Option::Some(mut n) => {
                    mem::swap(&mut n, &mut self.start);
                    Option::Some(n)
                }
                Option::None => Option::None,
            }
        } else {
            Option::None
        }
    }
}

pub trait IntoIterator {
    type Item;

    type IntoIter: Iterator<Item = Self::Item>;

    fn into_iter(self) -> Self::IntoIter;
}

impl<I: Iterator> IntoIterator for I {
    type Item = I::Item;
    type IntoIter = I;

    fn into_iter(self) -> I {
        self
    }
}

fn main() {}
philberty added this to the Final upstream patches milestone Feb 21, 2023
philberty self-assigned this Feb 21, 2023
@philberty
Member Author

Updated the test case here. I think I have mixed up int impls from Rust 1.29 and Rust 1.49, so I need to spend more time extracting code from libcore.

@CohenArthur CohenArthur moved this from Additional sprint items to Todo in libcore 1.49 Apr 18, 2023
@philberty
Member Author

#![feature(intrinsics)]

pub use option::Option::{self, None, Some};
pub use result::Result::{self, Err, Ok};

mod option {
    pub enum Option<T> {
        None,
        Some(T),
    }
}

mod result {
    pub enum Result<T, E> {
        Ok(T),
        Err(E),
    }
}

#[lang = "sized"]
pub trait Sized {}

#[lang = "clone"]
pub trait Clone: Sized {
    fn clone(&self) -> Self;

    fn clone_from(&mut self, source: &Self) {
        *self = source.clone()
    }
}

mod impls {
    use super::Clone;

    macro_rules! impl_clone {
        ($($t:ty)*) => {
            $(
                impl Clone for $t {
                    fn clone(&self) -> Self {
                        *self
                    }
                }
            )*
        }
    }

    impl_clone! {
        usize u8 u16 u32 u64 // u128
        isize i8 i16 i32 i64 // i128
        f32 f64
        bool char
    }
}

#[lang = "copy"]
pub trait Copy: Clone {
    // Empty.
}

mod copy_impls {
    use super::Copy;

    macro_rules! impl_copy {
        ($($t:ty)*) => {
            $(
                impl Copy for $t {}
            )*
        }
    }

    impl_copy! {
        usize u8 u16 u32 u64 // u128
        isize i8 i16 i32 i64 // i128
        f32 f64
        bool char
    }
}

mod intrinsics {
    extern "rust-intrinsic" {
        pub fn add_with_overflow<T>(x: T, y: T) -> (T, bool);
        pub fn wrapping_add<T>(a: T, b: T) -> T;
        pub fn wrapping_sub<T>(a: T, b: T) -> T;
        pub fn rotate_left<T>(a: T, b: T) -> T;
        pub fn rotate_right<T>(a: T, b: T) -> T;
        pub fn offset<T>(ptr: *const T, count: isize) -> *const T;
        pub fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
        pub fn move_val_init<T>(dst: *mut T, src: T);
        pub fn uninit<T>() -> T;
    }
}

mod ptr {
    #[lang = "const_ptr"]
    impl<T> *const T {
        pub unsafe fn offset(self, count: isize) -> *const T {
            intrinsics::offset(self, count)
        }
    }

    #[lang = "mut_ptr"]
    impl<T> *mut T {
        pub unsafe fn offset(self, count: isize) -> *mut T {
            intrinsics::offset(self, count) as *mut T
        }
    }

    pub unsafe fn swap_nonoverlapping<T>(x: *mut T, y: *mut T, count: usize) {
        let x = x as *mut T;
        let y = y as *mut T;
        let len = mem::size_of::<T>() * count;
        swap_nonoverlapping_bytes(x, y, len)
    }

    pub(crate) unsafe fn swap_nonoverlapping_one<T>(x: *mut T, y: *mut T) {
        // For types smaller than the block optimization below,
        // just swap directly to avoid pessimizing codegen.
        if mem::size_of::<T>() < 32 {
            let z = read(x);
            intrinsics::copy_nonoverlapping(y, x, 1);
            write(y, z);
        } else {
            swap_nonoverlapping(x, y, 1);
        }
    }

    pub unsafe fn write<T>(dst: *mut T, src: T) {
        intrinsics::move_val_init(&mut *dst, src)
    }

    pub unsafe fn read<T>(src: *const T) -> T {
        let mut tmp: T = mem::uninitialized();
        intrinsics::copy_nonoverlapping(src, &mut tmp, 1);
        tmp
    }

    unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
        struct Block(u64, u64, u64, u64);
        struct UnalignedBlock(u64, u64, u64, u64);

        let block_size = mem::size_of::<Block>();

        // Loop through x & y, copying them `Block` at a time
        // The optimizer should unroll the loop fully for most types
        // N.B. We can't use a for loop as the `range` impl calls `mem::swap` recursively
        let mut i: usize = 0;
        while i + block_size <= len {
            // Create some uninitialized memory as scratch space
            // Declaring `t` here avoids aligning the stack when this loop is unused
            let mut t: Block = mem::uninitialized();
            let t = &mut t as *mut _ as *mut u8;
            let x = x.offset(i as isize);
            let y = y.offset(i as isize);

            // Swap a block of bytes of x & y, using t as a temporary buffer
            // This should be optimized into efficient SIMD operations where available
            intrinsics::copy_nonoverlapping(x, t, block_size);
            intrinsics::copy_nonoverlapping(y, x, block_size);
            intrinsics::copy_nonoverlapping(t, y, block_size);
            i += block_size;
        }

        if i < len {
            // Swap any remaining bytes
            let mut t: UnalignedBlock = mem::uninitialized();
            let rem = len - i;

            let t = &mut t as *mut _ as *mut u8;
            let x = x.offset(i as isize);
            let y = y.offset(i as isize);

            intrinsics::copy_nonoverlapping(x, t, rem);
            intrinsics::copy_nonoverlapping(y, x, rem);
            intrinsics::copy_nonoverlapping(t, y, rem);
        }
    }
}

mod mem {
    extern "rust-intrinsic" {
        pub fn transmute<T, U>(_: T) -> U;
        pub fn size_of<T>() -> usize;
    }

    pub fn swap<T>(x: &mut T, y: &mut T) {
        unsafe {
            ptr::swap_nonoverlapping_one(x, y);
        }
    }

    pub fn replace<T>(dest: &mut T, mut src: T) -> T {
        swap(dest, &mut src);
        src
    }

    pub unsafe fn uninitialized<T>() -> T {
        intrinsics::uninit()
    }
}

macro_rules! impl_uint {
    ($($ty:ident = $lang:literal),*) => {
        $(
            impl $ty {
                pub fn wrapping_add(self, rhs: Self) -> Self {
                    unsafe {
                        intrinsics::wrapping_add(self, rhs)
                    }
                }

                pub fn wrapping_sub(self, rhs: Self) -> Self {
                    unsafe {
                        intrinsics::wrapping_sub(self, rhs)
                    }
                }

                pub fn rotate_left(self, n: u32) -> Self {
                    unsafe {
                        intrinsics::rotate_left(self, n as Self)
                    }
                }

                pub fn rotate_right(self, n: u32) -> Self {
                    unsafe {
                        intrinsics::rotate_right(self, n as Self)
                    }
                }

                pub fn to_le(self) -> Self {
                    #[cfg(target_endian = "little")]
                    {
                        self
                    }
                }

                pub const fn from_le_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
                    Self::from_le(Self::from_ne_bytes(bytes))
                }

                pub const fn from_le(x: Self) -> Self {
                    #[cfg(target_endian = "little")]
                    {
                        x
                    }
                }

                pub const fn from_ne_bytes(bytes: [u8; mem::size_of::<Self>()]) -> Self {
                    unsafe { mem::transmute(bytes) }
                }

                pub fn checked_add(self, rhs: Self) -> Option<Self> {
                    let (a, b) = self.overflowing_add(rhs);
                    if b {
                        Option::None
                    } else {
                        Option::Some(a)
                    }
                }

                pub fn overflowing_add(self, rhs: Self) -> (Self, bool) {
                    let (a, b) = unsafe { intrinsics::add_with_overflow(self as i32, rhs as i32) };
                    (a as Self, b)
                }
            }
        )*
    }
}

impl_uint!(
    u8 = "u8",
    u16 = "u16",
    u32 = "u32",
    u64 = "u64",
    usize = "usize"
);

#[lang = "add"]
pub trait Add<RHS = Self> {
    type Output;

    fn add(self, rhs: RHS) -> Self::Output;
}
macro_rules! add_impl {
    ($($t:ty)*) => ($(
        impl Add for $t {
            type Output = $t;

            fn add(self, other: $t) -> $t { self + other }
        }
    )*)
}

add_impl! { usize u8 u16 u32 u64  /*isize i8 i16 i32 i64*/  f32 f64 }

#[lang = "sub"]
pub trait Sub<RHS = Self> {
    type Output;

    fn sub(self, rhs: RHS) -> Self::Output;
}
macro_rules! sub_impl {
    ($($t:ty)*) => ($(
        impl Sub for $t {
            type Output = $t;

            fn sub(self, other: $t) -> $t { self - other }
        }
    )*)
}

sub_impl! { usize u8 u16 u32 u64  /*isize i8 i16 i32 i64*/  f32 f64 }

#[lang = "Range"]
pub struct Range<Idx> {
    pub start: Idx,
    pub end: Idx,
}

pub trait TryFrom<T>: Sized {
    /// The type returned in the event of a conversion error.
    type Error;

    /// Performs the conversion.
    fn try_from(value: T) -> Result<Self, Self::Error>;
}

pub trait From<T>: Sized {
    fn from(_: T) -> Self;
}

impl<T> From<T> for T {
    fn from(t: T) -> T {
        t
    }
}

impl<T, U> TryFrom<U> for T
where
    T: From<U>,
{
    type Error = !;

    fn try_from(value: U) -> Result<Self, Self::Error> {
        Ok(T::from(value))
    }
}

trait Step {
    /// Returns the number of steps between two step objects. The count is
    /// inclusive of `start` and exclusive of `end`.
    ///
    /// Returns `None` if it is not possible to calculate `steps_between`
    /// without overflow.
    fn steps_between(start: &Self, end: &Self) -> Option<usize>;

    /// Replaces this step with `1`, returning itself
    fn replace_one(&mut self) -> Self;

    /// Replaces this step with `0`, returning itself
    fn replace_zero(&mut self) -> Self;

    /// Adds one to this step, returning the result
    fn add_one(&self) -> Self;

    /// Subtracts one to this step, returning the result
    fn sub_one(&self) -> Self;

    /// Add an usize, returning None on overflow
    fn add_usize(&self, n: usize) -> Option<Self>;
}

// These are still macro-generated because the integer literals resolve to different types.
macro_rules! step_identical_methods {
    () => {
        #[inline]
        fn replace_one(&mut self) -> Self {
            mem::replace(self, 1)
        }

        #[inline]
        fn replace_zero(&mut self) -> Self {
            mem::replace(self, 0)
        }

        #[inline]
        fn add_one(&self) -> Self {
            //Add::add(*self, 1)
            *self
        }

        #[inline]
        fn sub_one(&self) -> Self {
            // Sub::sub(*self, 1)
            *self
        }
    };
}

macro_rules! step_impl_unsigned {
    ($($t:ty)*) => ($(
        impl Step for $t {
            fn steps_between(start: &$t, end: &$t) -> Option<usize> {
                if *start < *end {
                    // Note: We assume $t <= usize here
                    Option::Some((*end - *start) as usize)
                } else {
                    Option::Some(0)
                }
            }

            fn add_usize(&self, n: usize) -> Option<Self> {
                match <$t>::try_from(n) {
                    Result::Ok(n_as_t) => self.checked_add(n_as_t),
                    Result::Err(_) => Option::None,
                }
            }

            step_identical_methods!();
        }
    )*)
}
macro_rules! step_impl_signed {
    ($( [$t:ty : $unsigned:ty] )*) => ($(
        impl Step for $t {
            #[inline]
            #[allow(trivial_numeric_casts)]
            fn steps_between(start: &$t, end: &$t) -> Option<usize> {
                if *start < *end {
                    // Note: We assume $t <= isize here
                    // Use .wrapping_sub and cast to usize to compute the
                    // difference that may not fit inside the range of isize.
                    Option::Some((*end as isize).wrapping_sub(*start as isize) as usize)
                } else {
                    Option::Some(0)
                }
            }

            #[inline]
            #[allow(unreachable_patterns)]
            fn add_usize(&self, n: usize) -> Option<Self> {
                match <$unsigned>::try_from(n) {
                    Result::Ok(n_as_unsigned) => {
                        // Wrapping in unsigned space handles cases like
                        // `-120_i8.add_usize(200) == Option::Some(80_i8)`,
                        // even though 200_usize is out of range for i8.
                        let wrapped = (*self as $unsigned).wrapping_add(n_as_unsigned) as $t;
                        if wrapped >= *self {
                            Option::Some(wrapped)
                        } else {
                            Option::None  // Addition overflowed
                        }
                    }
                    Result::Err(_) => Option::None,
                }
            }

            step_identical_methods!();
        }
    )*)
}

macro_rules! step_impl_no_between {
    ($($t:ty)*) => ($(
        impl Step for $t {
            #[inline]
            fn steps_between(_start: &Self, _end: &Self) -> Option<usize> {
                Option::None
            }

            #[inline]
            fn add_usize(&self, n: usize) -> Option<Self> {
                self.checked_add(n as $t)
            }

            step_identical_methods!();
        }
    )*)
}

step_impl_unsigned!(usize);

pub trait Iterator {
    type Item;

    fn next(&mut self) -> Option<Self::Item>;
}

impl<A: Step> Iterator for Range<A> {
    type Item = A;

    fn next(&mut self) -> Option<A> {
        if self.start < self.end {
            // We check for overflow here, even though it can't actually
            // happen. Adding this check does however help llvm vectorize loops
            // for some ranges that don't get vectorized otherwise,
            // and this won't actually result in an extra check in an optimized build.
            match self.start.add_usize(1) {
                Option::Some(mut n) => {
                    mem::swap(&mut n, &mut self.start);
                    Option::Some(n)
                }
                Option::None => Option::None,
            }
        } else {
            Option::None
        }
    }
}

pub trait IntoIterator {
    type Item;

    type IntoIter: Iterator<Item = Self::Item>;

    fn into_iter(self) -> Self::IntoIter;
}

impl<I: Iterator> IntoIterator for I {
    type Item = I::Item;
    type IntoIter = I;

    fn into_iter(self) -> I {
        self
    }
}

fn main() {}

philberty added a commit that referenced this issue Aug 12, 2023
We hit an assertion with range based iterators here. This code was used
to solve complex generics such as:

  struct Foo<X,Y>(X,Y);
  impl<T> Foo<T, i32> {
    fn test<Y>(self, a: Y) { }
  }

The impl item will have the signature of:

  fn test<T,Y> (Foo<T, i32> self, a:Y)

So in the case where we have:

  let a = Foo(123f32, 456);
  a.test::<bool>(true);

We need to solve the generic argument T from the impl block by inferring the
arguments there and applying them, so that when we apply the generic
argument bool we don't end up in the case of a missing number of generics.

Addresses #1895

gcc/rust/ChangeLog:

	* typecheck/rust-hir-type-check-expr.cc (TypeCheckExpr::visit): remove hack

Signed-off-by: Philip Herron <[email protected]>
philberty added a commit that referenced this issue Aug 12, 2023
We do extra checking after the fact here to ensure it's a valid candidate,
and in the case where there is only one candidate, let's just go for it.

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-base.cc (HIRCompileBase::resolve_method_address):
	use the single candidate

Signed-off-by: Philip Herron <[email protected]>
philberty added a commit that referenced this issue Aug 12, 2023
We can end up with duplicate symbol names for different intrinsics with our
current hash setup. This adds in the mappings and extra info to improve
hash uniqueness.

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-intrinsic.cc (check_for_cached_intrinsic):
	simplify this cached intrinsic check
	* backend/rust-mangle.cc (legacy_mangle_item): use new interface
	* typecheck/rust-tyty.h: new mangle helper

Signed-off-by: Philip Herron <[email protected]>
philberty added a commit that referenced this issue Aug 21, 2023
There is a case where some generic types are holding onto inference
variable pointers directly. So this gives the backend a chance to do one
final lookup to resolve the type.

This now allows us to compile a full test case for iterators, but there is
still one miscompilation in here which results in a segfault at -O2 and a bad
result at -O0.

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-type.cc (TyTyResolveCompile::visit): do a final lookup

gcc/testsuite/ChangeLog:

	* rust/compile/iterators1.rs: New test.

Signed-off-by: Philip Herron <[email protected]>
philberty added a commit that referenced this issue Aug 31, 2023
The overflow intrinsic returns a tuple of (value, boolean), where the value
is the operator result and the boolean indicates whether it overflowed or not.
The intrinsic here did not initialize the resulting tuple and was therefore
creating a use-before-init error, resulting in garbage results.

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-intrinsic.cc (op_with_overflow_inner): fix use before init

Signed-off-by: Philip Herron <[email protected]>
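The fix above matters because callers immediately destructure that tuple and branch on the boolean; here is a minimal sketch of the consuming side, mirroring the checked_add wrapper from the goal test case (names taken from that test case, not from the compiler internals):

  // Sketch: how the (value, overflowed) pair is consumed. If the intrinsic
  // leaves the boolean uninitialized, this branch reads garbage.
  pub fn checked_add_sketch(x: u32, y: u32) -> Option<u32> {
      let (a, overflowed) = unsafe { intrinsics::add_with_overflow(x, y) };
      if overflowed { Option::None } else { Option::Some(a) }
  }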
philberty added a commit that referenced this issue Aug 31, 2023
Ensure the uninit intrinsic does not get optimized away

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-intrinsic.cc (uninit_handler): Update fndecl attributes

Signed-off-by: Philip Herron <[email protected]>
philberty added a commit that referenced this issue Aug 31, 2023
The intrinsic move_val_init was being optimized away even at -O0 because
the function looked "pure". This adds in the attributes to enforce that
this function has side effects, overriding that bad assumption in the
middle-end.

Addresses #1895

gcc/rust/ChangeLog:

	* backend/rust-compile-intrinsic.cc (move_val_init_handler): mark as side-effects

Signed-off-by: Philip Herron <[email protected]>
CohenArthur pushed a commit to CohenArthur/gccrs that referenced this issue Jan 12, 2024
We were reusing the match scrutinee expression as a way to access the
result of the expression. This is wrong: the result needs to be stored in a
temporary, otherwise the code will be regenerated each time it is
used. This is not an issue in the case where the expression is only
used once.

Fixes Rust-GCC#1895

gcc/rust/ChangeLog:

	* backend/rust-compile-expr.cc (CompileExpr::visit): use a temp for the value

gcc/testsuite/ChangeLog:

	* rust/execute/torture/iter1.rs: New test.

Signed-off-by: Philip Herron <[email protected]>
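A small illustration of the hazard, with a hypothetical next_value helper (not part of the test case) standing in for a scrutinee with side effects: if the scrutinee expression were re-generated at every use instead of being evaluated once into a temporary, the side effect would run more than once and different uses could observe different values.

  fn next_value(counter: &mut i32) -> i32 {
      *counter += 1; // side effect: must happen exactly once per match
      *counter
  }

  // `match next_value(&mut c) { n => ... }` must be lowered as if it were:
  let tmp = next_value(&mut c); // evaluate the scrutinee once into a temporary
  match tmp {
      n => {
          // every use of the matched value reads tmp, never a re-evaluation
      }
  }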