GCC Rust Weekly Status Report 59

Thanks again to Open Source Security, Inc. and Embecosm for their ongoing support of this project.

Milestone Progress

Our GSoC project on porting C++ constexpr support has continued to progress well: constant evaluation of builtins, which will be needed as part of handling overflow checks, is now supported. The remaining work is cleanup and merging back to the main branch.
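As a sketch of the kind of code this enables (a hypothetical example, not taken from the test suite), constant evaluation means the compiler must fold calls like the following entirely at compile time, including the overflow checks on the arithmetic:

```rust
// Hypothetical example: a const function the compiler must evaluate at
// compile time. Arithmetic in a const context requires overflow checks,
// which is where constant evaluation of builtins comes in.
const fn double(x: u8) -> u8 {
    x * 2
}

// Evaluated entirely at compile time; an overflow here would be rejected
// by the compiler rather than caught at runtime.
const DOUBLED: u8 = double(21);

fn main() {
    assert_eq!(DOUBLED, 42);
}
```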

On the GCC patches v2 front, we have received feedback that each patch does not need to be buildable as an independent unit; instead, we should focus on making the interesting pieces of code easier to review. This will make the process much easier for us.

Overall we have made quite a lot of progress in different ways this week, from refactoring and cleanup to bug fixing and feature work.

Completed Activities

  • Improve diagnostics when a builtin macro doesn’t exist PR1442
  • Cleanup recursive macro bug testcase PR1438
  • Initial support for the rustc_const_unstable attribute PR1444
  • Fix failure to type-infer generic unit-structs PR1451
  • Cleanup front-end entry points PR1425
  • Refactor Intrinsics class PR1445 PR1454
  • Fix the behaviour of transmute to do a raw copy rather than a cast PR1452
  • Change CI to enforce 32bit passing tests on merge PR1453
  • Remove unused code PR1463 PR1464
  • Refactor type resolution pass visitors PR1458
  • Don’t return early on error_mark_node for call arguments PR1466
  • Add wrapping_add, wrapping_sub and wrapping_mul intrinsics PR1465
  • Desugar HIR::IdentifierExpr into HIR::PathInExpression PR1467
  • Remove unused target hooks info in GCC PR1471
  • Implement copy_nonoverlapping intrinsic PR1459 PR1462 PR1468
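The wrapping intrinsics back the stable `wrapping_*` methods on the integer types; a quick sketch (using the stable standard-library methods, not the gccrs test cases) of the behaviour they must implement:

```rust
fn main() {
    // wrapping_add must wrap around on overflow instead of panicking
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
    // wrapping_sub and wrapping_mul behave analogously, using
    // two's-complement modular arithmetic
    assert_eq!(0u8.wrapping_sub(1), 255);
    assert_eq!(128u8.wrapping_mul(2), 0);
}
```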

Contributors this week

Overall Task Status

Category      Last Week   This Week   Delta
In Progress   29          33          +4
GitHub Issues

Test Cases

Category          Last Week   This Week   Delta
make check-rust


Category      Last Week   This Week   Delta
In Progress   14          17          +3
GitHub Bugs

Milestone Progress

Milestone                           Last Week   This Week   Delta   Start Date       Completion Date   Target
Data Structures 1 – Core            100%        100%        -       30th Nov 2020    27th Jan 2021     29th Jan 2021
Control Flow 1 – Core               100%        100%        -       28th Jan 2021    10th Feb 2021     26th Feb 2021
Data Structures 2 – Generics        100%        100%        -       11th Feb 2021    14th May 2021     28th May 2021
Data Structures 3 – Traits          100%        100%        -       20th May 2021    17th Sept 2021    27th Aug 2021
Control Flow 2 – Pattern Matching   100%        100%        -       20th Sept 2021   9th Dec 2021      29th Nov 2021
Macros and cfg expansion            100%        100%        -       1st Dec 2021     31st Mar 2022     28th Mar 2022
Imports and Visibility              100%        100%        -       29th Mar 2022    13th Jul 2022     27th May 2022
Const Generics                      50%         55%         +5%     30th May 2022    -                 17th Oct 2022
Intrinsics                          0%          0%          -       6th Sept 2022    -                 14th Nov 2022
GitHub Milestones


Risk                             Impact (1-3)   Likelihood (0-10)   Risk (I * L)   Mitigation
Rust Language Changes            2              7                   14             Target specific Rustc version for first go
Missing GCC 13 upstream window   1              6                   6              Merge in GCC 14 and be proactive about reviews

Planned Activities

  • Continue work on gcc patches v2
  • Continue work on const evaluation
  • Implement more compiler builtins
  • Bug fixing

Detailed changelog


This week, we worked on implementing the copy_nonoverlapping intrinsic, which has the following signature:

fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);

This intrinsic is, according to the documentation, semantically equivalent to a memcpy with the order of dst and src switched. This means that we can quite easily implement it using GCC’s __builtin_memcpy builtin. However, unlike most intrinsic functions, copy_nonoverlapping has side effects. To see why that matters, let’s take an example with transmute, another intrinsic working on memory:

fn transmute<T, U>(a: T) -> U;

fn main() {
    let a = 15.4f32;
    unsafe { transmute::<f32, i32>(a) }; // ignore the return value
}

Because this transmute function is pure and has no side effects (no I/O or writes to memory, for example), it is safe to optimize the call away; GCC takes care of this for us when performing its optimisation passes. However, the following calls were also being optimized out:

fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);

fn foo() -> i32 {
    let i = 15;
    let mut i_copy = 16;

    let i = &i as *const i32;
    let i_copy = &mut i_copy as *mut i32;

    unsafe { copy_nonoverlapping(i, i_copy, 1) };
    // At this point, `i_copy` should equal 15 and the function return 0

    unsafe { *i_copy - 15 }
}

This caused assertions that this foo function would return 0 to fail, as the call to copy_nonoverlapping was simply removed from the GIMPLE entirely. It took us quite some time to fix this overzealous optimization, which turned out to be caused by a flag set on the intrinsic’s declaration in GCC’s internal representation: even though the declaration was marked as having side effects (TREE_SIDE_EFFECTS(intrinsic_fn_declaration) = 1), the fact that it was also marked TREE_READONLY allowed the call to be optimized away. This was valid for all of the intrinsics we had implemented up until that point, which are pure functions. We now properly distinguish between pure and impure intrinsics when generating their implementations.
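As a sanity check, the expected semantics can be reproduced in plain Rust using the standard-library wrapper, mirroring the foo example above:

```rust
use std::ptr;

fn foo() -> i32 {
    let i = 15;
    let mut i_copy = 16;

    let src = &i as *const i32;
    let dst = &mut i_copy as *mut i32;

    // This call writes through `dst` and must not be optimized away.
    unsafe { ptr::copy_nonoverlapping(src, dst, 1) };

    // `i_copy` now holds 15, so this evaluates to 0.
    unsafe { *dst - 15 }
}

fn main() {
    assert_eq!(foo(), 0);
}
```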
